Monday Jul 14, 2014

Berkeley DB 12cR1 (12.1.6.1) released

New Release of Oracle Berkeley DB, Available for Download Now


The newest release of Berkeley DB 12cR1 (12.1.6.1) is available now. Here is a summary of the new features:

  • upgrades to in-memory OLTP throughput and performance
  • new HA improvement to identify a single master in a two-site replication group
  • new HA usability improvements
  • new BLOB support in replication
  • removed the need for a fail-check monitor process
  • reduced the time required for a database backup
  • and a lot more!

Berkeley DB continues to enable the most powerful embedded database solutions:

  • Handles TBs of data with a 1MB library
  • Flexible, lightweight storage engine with a small footprint
  • Runs on everything from low-power ARM devices to clusters of high-end servers
  • Over 50 open source software projects embed BDB -- check them out on Wikipedia
  • Completely customizable, with a choice of 5 different access methods
  • Industrial quality and battle tested, with over 200 million deployments

BDB is hands-down the best edge, mobile, and embedded database available to developers. With the flexibility to place log files and/or databases in any directory, applications can easily take advantage of the I/O performance of flash caches, flash disks, or SSDs.
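
Concretely, an application points the environment at those directories before opening it. Here is a minimal sketch (not from the announcement; the paths are hypothetical assumptions, and error handling is abbreviated):

    /* Sketch: write-ahead logs on a fast SSD, data files on bulk storage. */
    #include <db.h>
    #include <stdlib.h>

    int main(void) {
        DB_ENV *env;

        if (db_env_create(&env, 0) != 0)
            return EXIT_FAILURE;

        env->set_lg_dir(env, "/mnt/ssd/bdb-logs");    /* hypothetical path */
        env->add_data_dir(env, "/mnt/bulk/bdb-data"); /* hypothetical path */

        if (env->open(env, "/var/lib/myapp/env",      /* hypothetical home */
                      DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
                      DB_INIT_LOG | DB_INIT_TXN, 0) != 0) {
            env->close(env, 0);
            return EXIT_FAILURE;
        }

        env->close(env, 0);
        return EXIT_SUCCESS;
    }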


Top notch performance:

  • Berkeley DB performs over 5 million operations per second on a 100GB database running on a 1/8 rack Exadata V2 Database Machine configured with 256GB RAM and 12 cores.
  • Berkeley DB can insert 100 thousand records in 72 milliseconds and read those records back in 30 milliseconds, running on an 8-core XEON-based commodity server. The records contain a 4-byte key and a 48-byte data value. This test follows the benchmark described on pages 34-36 of a Microsoft Press ebook on SQL Server 2014.
  • To put this into perspective, we compared Berkeley DB to SQL Server 2014's In-Memory OLTP feature (code name Hekaton), which uses similar technology. Berkeley DB, an open source product, is about 20% faster: SQL Server 2014 takes 94 milliseconds for the same 100K insert operations on an 8-core x64 Intel commodity box with 14GB of memory.

    We are making available a benchmark program in C that can be configured to validate Berkeley DB throughput for the 100K insert test here.


What folks are saying:

Open source Fedora package maintainer Lubomir Rintel says, "Berkeley DB has quietly served behind the scenes as the database for the RPM Package Manager. It has proven itself time and time again as a robust and efficient storage engine. It stores the meta information of the installed rpms. Under heavy workloads, BDB proves itself reliable. Countless people that use popular Linux distributions have used BDB through RPM and never knew it. With this new release, BDB continues its tradition of being a solid storage engine."

Oracle Tape Product Manager, Dan Deppen, says "Berkeley DB is integral to Oracle StorageTek Storage Archive Manager (SAM-QFS). We have been embedding Berkeley DB in our product for over a decade and it is vital to our disk archiving feature, which is used to send files to remote data centers to enable disaster recovery. Performance and scalability are critical because SAM-QFS supports some of the largest archive customers in the world. HPC sites, research centers, national libraries and other customers requiring massive scalability and high reliability depend on SAM-QFS and Berkeley DB to maintain availability of their critical data."

Oracle Identity Management Vice President, Shirish Puranik, says "Berkeley DB is a critical component of Oracle Unified Directory (OUD) and Oracle Directory Server Enterprise Edition (ODSEE). We have been using Berkeley DB in these products as a high performance, transactional embedded database repository for several years. Berkeley DB has exceeded our expectations for performance and stability. Our Berkeley DB based products are widely deployed in production at the largest telcos and financial institutions all over the world. The improvements for BLOB support and high availability in the 6.1 release are welcome."



Software Downloads

Questions?

Please direct product questions to the Berkeley DB product help mailing list. You can also email the Berkeley DB Product Management team directly.

___________________________________________________
Berkeley DB Product Management
Internal -
https://stbeehive.oracle.com/teamcollab/wiki/Berkeley+DB+Inside+Scoop
OTN - 
http://www.oracle.com/technetwork/database/berkeleydb/overview/index.html

Thursday Jun 12, 2014

Data management in unexpected places


When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability.

Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet.

Ah – this sounds very much like a database problem. For each packet, you have to minimally

  • Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability, etc.

  • Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).

  • Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router in order to make this work.

  • If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.

  • Update various statistics about the packet.

In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this?

Berkeley DB is a proven, reliable, high performance, highly available embeddable database designed for exactly these kinds of usage scenarios, and it is already in production use in scenarios just like them.

First and foremost, Berkeley DB (or BDB for short) is very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database or as a disk-persistent database. BDB provides high availability: if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering; there's no need for manual intervention to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations.
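
To make that concrete, here is a minimal sketch of a transactional lookup-and-update against the BDB key/value C API. The database handles, record layout, and function name are hypothetical assumptions, and error handling is abbreviated:

    #include <db.h>
    #include <string.h>

    /* Sketch: look up the next hop for a destination and bump a packet
     * counter inside one transaction, so the route cannot change mid-send. */
    int route_packet(DB_ENV *env, DB *routes, DB *stats, const char *dest) {
        DB_TXN *txn = NULL;
        DBT key, val, sval;
        unsigned long count = 1;   /* real code would read-modify-write */
        int ret;

        if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
            return ret;

        memset(&key, 0, sizeof(key));
        memset(&val, 0, sizeof(val));
        key.data = (void *)dest;
        key.size = (u_int32_t)strlen(dest) + 1;

        /* 1. Find the next hop for this destination. */
        if ((ret = routes->get(routes, txn, &key, &val, 0)) != 0)
            goto abort;
        /* ... val.data now holds the next-hop record; send the packet ... */

        /* 2. Update per-destination statistics in the same transaction. */
        memset(&sval, 0, sizeof(sval));
        sval.data = &count;
        sval.size = sizeof(count);
        if ((ret = stats->put(stats, txn, &key, &sval, 0)) != 0)
            goto abort;

        return txn->commit(txn, 0);

    abort:
        txn->abort(txn);
        return ret;
    }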

BDB has been used in over 200 million deployments worldwide over the past two decades, in mission-critical applications such as the one described here. You can spend valuable resources implementing similar functionality yourself, or you can simply embed BDB in your application and off you go! I know what I'd do: choose BDB, so I can focus on my business problem. What will you do?

Wednesday Jun 27, 2012

Yammer, Berkeley DB, and the 3rd Platform

If you read the news, you know that the latest high-profile social media acquisition was just confirmed. Microsoft has agreed to acquire Yammer for $1.2 billion. Personally, I believe that Yammer’s amazing success can be mainly attributed to their wise decision to use Berkeley DB Java Edition as their backend data store. :-)

I’m only kidding, of course. However, as Ryan Kennedy points out in the video I recently blogged about, BDB JE did provide the right feature set that allowed them to reliably grow their business, which in turn allowed them to focus on their core value add. As it turns out, their ‘add’ is quite valuable!

This actually makes sense to me, a lot more sense than certain other recent social acquisitions, and here’s why. Last year, IDC declared that we are entering a new computing era, the era of the “3rd Platform.” In case you’re curious, the first 2 were terminal computing and client/server computing, IIRC. Anyway, this 3rd one is more complicated. This year, IDC refined the concept further. It now involves 4 distinct buzzwords: cloud, social, mobile, and big data.

Yammer is a social media platform that runs in the cloud, designed to be used from mobile devices. Their approach, using Berkeley DB Java Edition with High Availability, qualifies as big data. This means that Yammer is sitting right smack in the center of IDC’s new computing era. Another way to put it is: the folks at Yammer were prescient enough to predict where things were headed, and get there first.

They chose Berkeley DB to handle their data. Maybe you should too!

Tuesday Jun 05, 2012

Highlights from recent Yammer video

A few weeks back, Ryan Kennedy of Yammer gave a talk about Berkeley DB Java Edition. You can find it posted here on Alex Popescu's Blog, or go directly to the video post itself. It was full of useful nuggets of information, such as why they chose to use BDB JE, performance, and some tips & tricks at the end. At over 40 minutes, the video is quite long. Ryan is an entertaining speaker, so I suggest you watch all of it. But if you only have time for the highlights, here are some times you can sync to:

06:18 Hear the Berkeley DB JE features that led Yammer to select it, including:

  • replication
  • auto leader election, failover
  • configurable durability and consistency guarantees

23:10 System performance characteristics

35:08 Check out the tips and tricks for using Berkeley DB JE

I know the Berkeley DB development team is very pleased that BDB JE is working out well for Yammer. We definitely encourage others out there to take note of this success, especially if your requirements are similar to Yammer's (which Ryan outlines at the beginning of his talk).

Monday Feb 27, 2012

Now you can build Berkeley DB into your Android apps

I want to make everyone aware of a small change we made in the last release that promises to make a big difference for our customers using Android.

We added support for the Android platform some time back. However, there was a caveat: Berkeley DB had to be integrated at the OS level, replacing the SQLite routines that usually ship with Android. Having Berkeley DB built into the OS provided some advantages. But it also meant that customers were unable to build a BDB-enabled app that could be deployed to any generic Android device. Also some customers were hesitant to make OS modifications themselves.

In our latest release, Berkeley DB 5.3, we have added the capability to use BDB on a per-application basis. This means you can build your application to use Berkeley DB, and the library routines will be bundled in when you package everything up. The result is an application that can take advantage of Berkeley DB’s strengths on any Android device. We've even included instructions on how to achieve this in our online documentation.

Berkeley DB supports a wide array of mobile platforms, starting with Oracle’s own embedded Java, which many forget still dominates the feature phone market. BDB also supports Android and iOS, the two platforms that continue to own the smartphone marketplace. This latest enhancement will make it even easier for Android developers to use BDB. Finally, we offer the option to choose between a SQLite-compatible SQL API and our traditional key/value API (for you NoSQL fans out there). Our combination of features and platform support makes Berkeley DB the best choice for anyone who needs an enterprise-grade data store on a mobile platform.

Tuesday Jul 12, 2011

Customer Highlight: AdaptiveMobile

Last week AdaptiveMobile announced that they signed a 3-year agreement to embed Oracle Berkeley DB in their Network Protection Platform, a key component of their product portfolio.

AdaptiveMobile is the world leader in mobile security, enabling trusted networks for the world’s largest operator groups and protecting one in six subscribers globally. AdaptiveMobile provides operators with the most comprehensive network-based security solutions enabling them to protect their consumer and enterprise customers against the growing threat of mobile abuse. From the announcement: “AdaptiveMobile selected Oracle Berkeley DB for its outstanding data retrieval speeds, reliability, scalability and availability and its integration with the company’s existing application environment.”

Here is a great quote from Gareth Maclachlan, AdaptiveMobile Chief Operating Officer: “Oracle’s Berkeley DB has enabled us to develop a solution that resolves the content and access control challenges of both network operators and their customers without impacting the user experience. Working with Oracle has allowed us to reduce our development time and cut our total cost of ownership for network operators by helping us reduce our hardware and administration costs.”

For more information, you can see the announcement, or learn more about AdaptiveMobile’s products.

Wednesday Jun 22, 2011

New Release of Oracle Berkeley DB

We are pleased to announce that a new release of Oracle Berkeley DB, version 11.2.5.2.28, is available today.

Our latest release includes yet more value-added features for SQLite users, as well as several performance enhancements and new customer-requested features in the key-value pair API. We continue to provide technology leadership, features and performance for SQLite applications. This release introduces additional features that are not available in native SQLite, and adds functionality allowing customers to create richer, more scalable, more concurrent applications using the Berkeley DB SQL API.

This release is compelling to Oracle’s customers and partners because it:

  • delivers a complete, embeddable SQL92 database
  • as a library under 1MB in size
  • drop-in API compatibility with SQLite version 3 (see the sketch after this list)
  • no-oversight, zero-touch database administration
  • industrial quality, battle tested Berkeley DB B-TREE for concurrent transactional data storage
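
Because the SQL API is a drop-in replacement for SQLite 3, an existing SQLite call sequence should work unchanged once compiled against the Berkeley DB SQL library. A minimal sketch (the file and table names are hypothetical):

    #include <stdio.h>
    #include "sqlite3.h"   /* header provided by the Berkeley DB SQL API build */

    int main(void) {
        sqlite3 *db;
        char *err = NULL;

        /* Same calls as stock SQLite; the storage underneath is a BDB B-TREE. */
        if (sqlite3_open("demo.db", &db) != SQLITE_OK)
            return 1;

        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t (k INTEGER PRIMARY KEY, v TEXT);",
                     NULL, NULL, &err);
        sqlite3_exec(db, "INSERT INTO t (v) VALUES ('hello from BDB');",
                     NULL, NULL, &err);

        if (err != NULL) {
            fprintf(stderr, "%s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }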

New Features Include:

  • MVCC support for even higher concurrency
  • direct SQL support for HA/replication
  • transactionally protected sequence-number generation functions
  • lower memory requirements, shared memory regions, and faster/smaller memory use on startup
  • easier B-TREE page size configuration with the new "db_tuner" utility

New Key-Value API Features Include:

  • HEAP access method for constrained disk-space applications (key-value API)
  • faster QUEUE access method operations for highly concurrent applications -- up to 2-3X faster! (key-value API)
  • new X/Open-compliant XA resource manager, easily integrated with Oracle Tuxedo (key-value API)
  • additional HA/replication management and communication options (key-value API)

and a lot more!

BDB is hands-down the best edge, mobile, and embedded database available to developers.

Downloads available today on the Berkeley DB download page

Product Documentation

Tuesday Mar 29, 2011

Ubiquitous Mobile Applications


It goes without saying that smart phones are already very popular and enjoying rapid growth. But a few things finally happened in the last year that people have been predicting for a long time, things that could make smartphones and mobile apps an even bigger part of our lives.

First off, in what many consider to be a bold move from a normally conservative company, GM has already released mobile apps that allow many of their cars to be remotely monitored, unlocked, and even started. So as long as you have a high degree of confidence in your smartphone's battery life, you don't need your car keys anymore.

Next, we have Starbucks, who have quietly sidestepped the ongoing mobile payments dispute by launching a barcode based payments app that doesn't require any special technology. Other large retailers are probably watching this closely, and if the big cell phone companies don't get their acts together, they might just end up missing the boat again. In any case, many industry observers feel that the Starbucks system could be the spark that sets off an explosion of mobile payment systems, from McDonald's to Neiman Marcus, and everywhere in between. Here is some more detail on the Starbucks solution, and for an industry analysis click here.

But I saved the biggest one for last. Are you ready? Take a deep breath, and then read this: Smartphones outsold PCs for the first time, Q4 2010. This milestone happened much sooner than anyone expected, thanks mostly to new Android activation numbers so high they're almost hard to believe. At the start of 2011 people were talking about activations in excess of 125k units per day. Currently the number being tossed around is 300k units per day. If Android can keep it up, that platform alone will outsell PCs in 2011. Folks, this could be the end of an era.

When we combine these milestones, the smartphone appears poised to become a sort of ultimate Swiss Army knife; a single device that does everything. The thought of it is exciting; imagine the convenience! Alas, as with all great things in life, there is a downside. Money, privacy, communication, and even physical security could be compromised if the software or data on the device is not secure.

The average person has at least 3 things with them when they leave the house: wallet, keys, and cell phone. Today, if one of the three things were to be lost, that would be bad. It would be a huge inconvenience, in the best case. However, when a cell phone is lost today, the unfortunate person is not normally forced to cancel their credit cards and reprogram their car locks. Worse than that, if we envision the future that the stories listed above point to, there is a potential for any or all of the three to be stolen electronically, by a thief far away, even in a different country. That should be a sobering prospect, both for consumers and the various companies providing services to them. It is easy to tout the benefits of technology convergence; it's a fun and exciting topic. But as technology providers, it is our duty to also consider the potential drawbacks.

In order to keep everyone safe, sensitive mobile data needs to be securely stored on the local device, and transmitted reliably to each company's data vaults. Those same companies need to be able to determine, with 100% accuracy, whether a certain request is coming from an authorized device or an imposter. This could be a mobile purchase, a request to unlock your car, or even your home.

Two critical components of a safe, trustworthy solution for storing and syncing sensitive mobile data are Oracle's Berkeley DB and Database Mobile Server. Most of the world's large enterprises already store their critical data in Oracle Database. Berkeley DB is a stable, mature product with over 10 years of proven history. It is ideally positioned to be the mobile data store to complement Oracle Database. It is secure in the traditional sense, meaning it can protect mobile data from malicious intent, and in the database architecture sense as well, with full transactional guarantees. Database Mobile Server provides the final piece to form a complete solution: the capability to synchronize your data between the mobile devices and your Oracle backend, and manage the mobile application, data, and even the device itself if desired.

If you're considering a mobile application that will access sensitive data, data that you already trust Oracle to store in your backend infrastructure, Berkeley DB and Database Mobile Server are the best choices to handle it.

Sunday Jan 30, 2011

Is Berkeley DB a NoSQL solution?

Berkeley DB is a library. To use it to store data, you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C-API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX, written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage.

At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store, you would choose between the few commercial RDBMS solutions and writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage; new solutions were needed.

A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities. They were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems.

The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist: it was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins. They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations.

The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak was at first entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB's except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk.

Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a Paxos-style algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running on Berkeley DB HA, scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across the nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not.
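
As a rough illustration of how small the replication manager surface is, here is a minimal sketch of starting one HA node (the host names, ports, and home directory are hypothetical assumptions; error handling is omitted):

    #include <db.h>

    /* Sketch: start one node of a BDB HA replication group and let the
     * group elect a master (Paxos-style elections, as described above). */
    int start_ha_node(const char *home) {
        DB_ENV *env;
        DB_SITE *site;

        db_env_create(&env, 0);

        /* This node's own listening address. */
        env->repmgr_site(env, "node1.example.com", 5001, &site, 0); /* hypothetical */
        site->set_config(site, DB_LOCAL_SITE, 1);
        site->close(site);

        /* A known peer used to find the rest of the group. */
        env->repmgr_site(env, "node2.example.com", 5001, &site, 0); /* hypothetical */
        site->set_config(site, DB_BOOTSTRAP_HELPER, 1);
        site->close(site);

        env->open(env, home,
                  DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG |
                  DB_INIT_TXN | DB_INIT_REP | DB_THREAD | DB_RECOVER, 0);

        /* Run elections rather than hard-coding a master. */
        return env->repmgr_start(env, 3, DB_REP_ELECTION);
    }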

So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forget all the noise about how NoSQL solutions are complex distributed databases: when you boil them down to a single node, you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really.

Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. Our key/value API could be extended over the net using any of a number of existing network protocols, such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle.

Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

Thursday Jan 27, 2011

How to use Berkeley DB's non-SQL, Key/Value API to implement "SELECT * FROM table WHERE X = key ORDER BY Y"

In this post we explore how Berkeley DB's key/value API can support complex SQL-like queries without having to use the SQL API itself. For this example we will consider the query, "SELECT * FROM table WHERE X = key ORDER BY Y". To perform this query we will be using a composite secondary index alongside the primary database table to construct the join.

First of all, it's important to think of Berkeley DB as nearly identical to the storage engine underneath any RDBMS. In fact, for many years Berkeley DB was the first transactional data storage engine integrated with MySQL; we predate InnoDB in that regard. As such, Berkeley DB has API calls and access methods that can support any RDBMS query. This is further demonstrated by the new Berkeley DB SQL API, which is simply the marriage of the SQL processing layer of SQLite and the B-Tree storage of Berkeley DB.

In this example we will set aside the fact that Berkeley DB has a SQL API and show you how to use it as a customizable storage engine that executes the equivalent of the example query. In this case your application has to provide the code that accesses the data store with an appropriate sequence of steps that will implement the behavior that you want.

If you have two indices in SQL, each on a single column (call them X and Y), and you do: SELECT * FROM table WHERE X = key ORDER BY Y; then there are three plausible query plans:


  1. scan the whole table, ignore both indices, filter by X = key then sort by Y;

  2. use the index on Y to scan all rows in the required order, filter by X = key;

  3. use the index on X, find the matching rows, then sort by Y.


There are cases where (1) would be fastest, because it has all of the columns from one scan (the other query plans will do random lookups on the primary for each row). This assumes that the data can fit into memory and the sort is fast.

Query plan (2) will be fastest if the selectivity is moderate to high, looking up rows in the main table is fast, and sorting the rows is very slow for some reason (e.g., some complex collation).

Query plan (3) will be fastest if the selectivity is small (only a small percentage of the rows in the table matches). This should be the best case for us, making it the best choice in a Berkeley DB key/value application.

The optimal plan would result from having a composite index on (X, Y), which can return just the desired rows in the desired order. Of course, it does cost additional time and space to maintain that index. But note that you could have this index instead of a simple index on X: it can be used in any query the simple index could be used in.

Records in Berkeley DB are (key, value) pairs, and Berkeley DB supports only a few logical operations on these records. They are:

  • Insert a record in a table.

  • Delete a record from a table.

  • Find a record in a table by looking up its key (or by positioning a cursor).

  • Update a record that has already been found.


Notice that Berkeley DB never operates on the value part of a record. Values are simply payload, to be stored with keys and reliably delivered back to the application on demand. Both keys and values can be arbitrary byte strings, either fixed-length or variable-length.
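
For readers new to the API shapes, here is a minimal sketch of those four operations against an open DB handle (handle setup is omitted; the keys and values are illustrative assumptions):

    #include <db.h>
    #include <string.h>

    /* Sketch: the four logical record operations named above. */
    void record_ops(DB *db) {
        DBT key, val;

        memset(&key, 0, sizeof(key));
        memset(&val, 0, sizeof(val));
        key.data = "k1"; key.size = 3;   /* illustrative key */
        val.data = "v1"; val.size = 3;   /* illustrative value */

        db->put(db, NULL, &key, &val, 0);   /* insert a record */
        db->get(db, NULL, &key, &val, 0);   /* find a record by its key */
        db->put(db, NULL, &key, &val, 0);   /* update = put over the same key
                                               (or DBC->put with DB_CURRENT on
                                               a positioned cursor) */
        db->del(db, NULL, &key, 0);         /* delete a record */
    }
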
So, in the case of a "SELECT * FROM table WHERE X = key ORDER BY Y" query, our suggestion, from Berkeley DB's point of view, would be for you to use a composite index (as it would be in SQL), where a composite key string created as X_Y should do the trick, as explained in the following scenario.
Primary (primary key -> X, Y):

  key  X   Y
  1    10  abc
  2    10  aab
  3    20  bbc
  4    10  bba
  5    20  bac
  6    30  cba

Secondary (composite key X_Y -> primary key):

  10_aab  2
  10_abc  1
  10_bba  4
  20_bac  5
  20_bbc  3
  30_cba  6

If the query looks like this:

    SELECT * FROM primarydb WHERE X = 10 ORDER BY Y

the application can run a cursor on the secondary and begin the loop with the DB_SET_RANGE flag on 10. When iterating with a cursor executing DB_NEXT, this will return:

  2  10  aab
  1  10  abc
  4  10  bba

The application must check for the end of the range inside the loop; in this case, it should stop when it hits 20_bac.
As in SQL, retrieving by a secondary key is remarkably similar to retrieving by a primary key and the Berkeley DB call will look similar to its primary equivalent.
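
Here is a minimal sketch of that loop against the C API (the handle and key layout are assumed to be as above; error handling is abbreviated):

    #include <db.h>
    #include <string.h>

    /* Sketch: iterate the secondary index for all composite keys with the
     * prefix "10_", i.e. X = 10 ordered by Y, stopping at the first key
     * outside the range (here, "20_bac"). */
    void scan_x_equals_10(DB *secondary) {
        DBC *cursor;
        DBT key, data;
        int ret;

        secondary->cursor(secondary, NULL, &cursor, 0);

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = "10_";
        key.size = 3;

        /* DB_SET_RANGE positions at the smallest key >= "10_". */
        for (ret = cursor->get(cursor, &key, &data, DB_SET_RANGE);
             ret == 0;
             ret = cursor->get(cursor, &key, &data, DB_NEXT)) {
            /* End-of-range check: stop once we leave the "10_" prefix. */
            if (key.size < 3 || memcmp(key.data, "10_", 3) != 0)
                break;
            /* data holds the primary key; a real application would fetch the
             * primary record here (or use DB->pget on an associated index). */
        }

        cursor->close(cursor);
    }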


The approach should perform very well and is likely to be the fastest solution. Of course, you can try other methods and tune the database for performance at a later time, after you have the functionality in place. The first and most critical performance factor is almost always cache size: increase the cache size and measure your performance to find an optimal amount for your system and use case. Second, you might test with a bigger database page size that relates more directly to the average size of your keys and values. Lastly, consider the various configuration options available within the transactional subsystem (log buffer size, trickle thread, etc.).
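
For instance, the two knobs mentioned above are each a one-line call made before opening the environment or database. A sketch (the values are illustrative assumptions, not recommendations):

    #include <db.h>

    /* Sketch: common tuning knobs; call before DB_ENV->open() / DB->open(). */
    void tune(DB_ENV *env, DB *db) {
        env->set_cachesize(env, 0, 256 * 1024 * 1024, 1);  /* 256MB, one region */
        db->set_pagesize(db, 8192);                        /* bytes per B-TREE page */
    }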

If you are not very familiar with how to implement the above in BDB, please read the Guide to Oracle Berkeley DB for SQL Developers.

You will also need to be familiar with secondary indexes, cursor operations such as DBcursor->get() and DB_SET_RANGE.

About

Information about Berkeley DB products directly from the people who build them.
