Wednesday Jun 01, 2011

Embedded Systems Conference San Jose

The annual Embedded Systems Conference in San Jose was held at the beginning of May. This was Oracle’s first year at the conference, and we did get visitors who were surprised to see us there. However, some people, myself included, think we're going to see increased convergence between enterprise and embedded in the coming years. I know we're not alone, because a certain other big name in enterprise systems had a booth right next to ours!

Since embedded is not a topic everyone is familiar with, I want to give a little insight into this conference and the embedded space in general, as I think it is potentially an important growth area for products like Berkeley DB. I was an embedded developer myself in a past life, so this is familiar territory for me.

Wikipedia defines an embedded system as "a computer system designed to do one or a few dedicated and/or specific functions." It is important to note that embedded is generally considered to be distinct from mobile. Mobile platforms are typically derived in some way from desktop platforms such as Linux, Windows, or OS X, and are often more general purpose devices. Before the advent of cheap, high-res, general purpose LCD displays, having a display in your device meant a Cathode Ray Tube (CRT). Embedded devices were traditionally ‘headless,’ meaning they had no display and no generic input device such as a keyboard. Because of this, in the early days developers would commonly develop on desktop machines using a cross toolchain: a set of tools that runs on the desktop development platform but builds software for the target embedded platform, which was typically a completely different hardware architecture and OS. Nowadays, many embedded platforms are powerful enough to run their own toolchains. Such was the case with our demo; more on that below.

Another common aspect of an embedded system is “real time” requirements. A simplified definition would be if a given operation does not complete by a certain time, it’s just as bad as not finishing at all. Real Time Operating Systems, or RTOSes, can provide guarantees about when operations will finish. Real time embedded devices are still quite prevalent in some industries, including military, aviation, industrial manufacturing, and networking. The embedded space has certainly been encroached on by the rise of mobile, but as long as we have mission critical devices there will continue to be a requirement for embedded devices.

Now back to the Embedded Systems Conference. Booth traffic was high; we were averaging about one visitor per minute for nearly the whole time I was there. I attribute this partly to curiosity, but mostly to our great giveaways! They did their job: I talked to a number of people who were initially attracted to the booth by our swag and ended up having a genuine interest in what we were showing. We also held a drawing for an iPad, which brought a ton of people over to register.

Our demo was a temperature sensor attached to a small device called a SheevaPlug, which is a general purpose embedded development device from Marvell. By embedded standards, the SheevaPlug is a very powerful device, and we were able to develop directly on it. The idea behind the demo was that the device represented one of many nodes in a sensor network. Some real world examples of this include weather stations, or monitoring conditions inside laboratories or industrial facilities. Our demo showed the system collecting temperature data, which was then uploaded to Oracle Database. All of this was running on top of Java SE Embedded. The demo was well received. Nearly everyone who listened to me present agreed that the sync functionality would be useful to them, or useful in general if they didn’t need it themselves.

The main purpose of our presence at ESC was to showcase the power, ease of use, and versatility of Java Embedded. When you combine that with Berkeley DB and Oracle Database Lite Mobile Server, you get a system with out-of-the-box capability to move data to and from enterprise storage systems. After a few simple configuration steps, the data stored in the local Berkeley DB or SQLite data store is connected to the enterprise backend. This is a potent combination of features, and one that we feel will be in high demand in the coming years, as M2M and embedded solutions continue to proliferate.

Tuesday Mar 29, 2011

Ubiquitous Mobile Applications


It goes without saying that smart phones are already very popular and enjoying rapid growth. But a few things finally happened in the last year that people have been predicting for a long time, things that could make smartphones and mobile apps an even bigger part of our lives.
First off, in what many consider to be a bold move from a normally conservative company, GM has already released mobile apps that allow many of their cars to be remotely monitored, unlocked, and even started. So as long as you have a high degree of confidence in your smartphone's battery life, you don't need your car keys anymore.

Next, we have Starbucks, who have quietly sidestepped the ongoing mobile payments dispute by launching a barcode based payments app that doesn't require any special technology. Other large retailers are probably watching this closely, and if the big cell phone companies don't get their acts together, they might just end up missing the boat again. In any case, many industry observers feel that the Starbucks system could be the spark that sets off an explosion of mobile payment systems, from McDonald's to Neiman Marcus, and everywhere in between.

But I saved the biggest one for last. Are you ready? Take a deep breath, and then read this: Smartphones outsold PCs for the first time, Q4 2010. This milestone happened much sooner than anyone expected, thanks mostly to new Android activation numbers so high they're almost hard to believe. At the start of 2011 people were talking about activations in excess of 125k units per day. Currently the number being tossed around is 300k units per day. If Android can keep it up, that platform alone will outsell PCs in 2011. Folks, this could be the end of an era.

When we combine these milestones, the smartphone appears poised to become a sort of ultimate Swiss Army knife: a single device that does everything. The thought of it is exciting; imagine the convenience! Alas, as with all great things in life, there is a downside. Money, privacy, communication, and even physical security could be compromised if the software or data on the device is not secure.

The average person has at least 3 things with them when they leave the house: wallet, keys, and cell phone. Today, if one of the three things were to be lost, that would be bad. It would be a huge inconvenience, in the best case. However, when a cell phone is lost today, the unfortunate person is not normally forced to cancel their credit cards and reprogram their car locks. Worse than that, if we envision the future that the stories listed above point to, there is a potential for any or all of the three to be stolen electronically, by a thief far away, even in a different country. That should be a sobering prospect, both for consumers and the various companies providing services to them. It is easy to tout the benefits of technology convergence; it's a fun and exciting topic. But as technology providers, it is our duty to also consider the potential drawbacks.

In order to keep everyone safe, sensitive mobile data needs to be securely stored on the local device, and transmitted reliably to each company's data vaults. Those same companies need to be able to determine, with 100% accuracy, whether a certain request is coming from an authorized device or an imposter. This could be a mobile purchase, a request to unlock your car, or even your home.

Two critical components of a safe, trustworthy solution for storing and syncing sensitive mobile data are Oracle's Berkeley DB and Database Mobile Server. Most of the world's large enterprises already store their critical data in Oracle Database. Berkeley DB is a stable, mature product with over 10 years of proven history. It is ideally positioned to be the mobile data store that complements Oracle Database. It is secure in the traditional sense, meaning it can protect mobile data from malicious intent, and in the database architecture sense as well, with full transactional guarantees. Database Mobile Server provides the final piece to form a complete solution: the capability to synchronize your data between the mobile devices and your Oracle backend, and to manage the mobile application, data, and even the device itself if desired.
If you're considering a mobile application that will access sensitive data, data that you already trust Oracle to store in your backend infrastructure, Berkeley DB and Database Mobile Server are the best choices to handle it.

Sunday Jan 30, 2011

Is Berkeley DB a NoSQL solution?

Berkeley DB is a library. To use it to store data you must link the library into your application. You can access the API from most programming languages, and the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, written by AT&T's Ken Thompson and shipped with early versions of UNIX in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage.

At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store, you would choose between the few commercial RDBMS solutions and writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM," mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law: processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage; new solutions were needed.

A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities. They were designed to handle massive amounts of data and millions of requests per second, and to scale out across multiple systems.

The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record: it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins. They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad other situations.

The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak, and Voldemort. The problem their designers set out to solve was "reliability at massive scale," so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL, or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one of them, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak was at first entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk.

Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA; that scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across the nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not.
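
To make that concrete, here is a rough sketch of what bringing up one node of a Berkeley DB HA replication group looks like with the C++ API. The host names, ports, and thread count are placeholders, and it assumes the pre-5.2 replication manager calls (repmgr_set_local_site, repmgr_add_remote_site, repmgr_start); treat it as an illustration rather than a drop-in recipe.

#include <db_cxx.h>

// Sketch: open a replicated environment and start the replication manager.
// Host names, ports, and the two-site group are illustrative only.
int start_replica(const char *env_home, bool is_master)
{
    DbEnv *env = new DbEnv(0);
    env->repmgr_set_local_site("node1.example.com", 5001, 0);        // this node
    int eid;
    env->repmgr_add_remote_site("node2.example.com", 5002, &eid, 0); // a peer

    u_int32_t flags = DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
        DB_INIT_MPOOL | DB_INIT_TXN | DB_INIT_REP | DB_RECOVER | DB_THREAD;
    env->open(env_home, flags, 0);

    // Start the repmgr threads; DB_REP_ELECTION lets the group elect a
    // master (the PAXOS-style election mentioned above).
    env->repmgr_start(3, is_master ? DB_REP_MASTER : DB_REP_ELECTION);
    return 0;
}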

So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally expose some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really.

Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing and jump into this NoSQL market within a single product development cycle.

Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

Thursday Jan 27, 2011

How to use Berkeley DB's non-SQL, Key/Value API to implement "SELECT * FROM table WHERE X = key ORDER BY Y"

In this post we explore how Berkeley DB's key/value API can support complex SQL-like queries without having to use the SQL API itself. For this example we will consider the query "SELECT * FROM table WHERE X = key ORDER BY Y". To perform this query we will use a composite secondary index alongside the primary database table to construct the join.
First of all, it's important to think of Berkeley DB as nearly identical to the storage engine underneath any RDBMS. In fact, for many years Berkeley DB was integrated with MySQL as its first transactional data storage engine; we predate InnoDB in that regard. As such, Berkeley DB has API calls and access methods that can support any RDBMS query. This is further demonstrated by the new Berkeley DB SQL API, which is simply the marriage of the SQL processing layer of SQLite and the B-Tree storage of Berkeley DB.
In this example we will set aside the fact that Berkeley DB has a SQL API and show you how to use it as a customizable storage engine that executes the equivalent of the example query. In this case your application has to provide the code that accesses the data store with an appropriate sequence of steps that will implement the behavior that you want.

If you have two indices in SQL, each on a single column (call them X and Y), and you do: SELECT * FROM table WHERE X = key ORDER BY Y; then there are three plausible query plans:


  1. scan the whole table, ignore both indices, filter by X = key then sort by Y;

  2. use the index on Y to scan all rows in the required order, filter by X = key;

  3. use the index on X, find the matching rows, then sort by Y.


There are cases where (1) would be fastest, because it has all of the columns from one scan (the other query plans will do random lookups on the primary for each row). This assumes that the data can fit into memory and the sort is fast.
Query plan (2) will be fastest if the selectivity is moderate to high, looking up rows in the main table is fast, and sorting the rows is very slow for some reason (e.g., some complex collation).
Query plan (3) will be fastest if the selectivity is small (only a small percentage of the rows in the table matches). This should be the best case for us, making it the best choice in a Berkeley DB key/value application.
The optimal plan would result from having a composite index on (X, Y), which can return just the desired rows in the desired order. Of course, it does cost additional time and space to maintain that index. But note that you could have this index instead of a simple index on X: it can be used in any query the simple index could be used in.
Records in Berkeley DB are (key, value) pairs and Berkeley DB supports only a few logical operations on these records. They are:

  • Insert a record in a table.

  • Delete a record from a table.

  • Find a record in a table by looking up its key (or by positioning a cursor).

  • Update a record that has already been found.


Notice that Berkeley DB never operates on the value part of a record. Values are simply payload, to be stored with keys and reliably delivered back to the application on demand. Both keys and values can be arbitrary byte strings, either fixed-length or variable-length.
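
To make the key/value model concrete, here is a minimal sketch of the insert and find operations from the list above, using the Berkeley DB C++ API. The file name and record contents are purely illustrative.

#include <db_cxx.h>
#include <cstdio>
#include <cstring>

// Sketch: open a btree database, insert one key/value pair, and read it back.
int kv_example()
{
    Db db(NULL, 0);
    db.open(NULL, "example.db", NULL, DB_BTREE, DB_CREATE, 0);

    // Insert: keys and values are opaque byte strings wrapped in Dbt objects.
    char keybuf[] = "10_abc", valbuf[] = "1";
    Dbt key(keybuf, sizeof(keybuf)), value(valbuf, sizeof(valbuf));
    db.put(NULL, &key, &value, 0);

    // Find by key: Berkeley DB returns the value bytes unchanged; it never
    // interprets the value part of a record.
    Dbt found;
    if (db.get(NULL, &key, &found, 0) == 0)
        printf("found: %s\n", (char *)found.get_data());

    db.close(0);
    return 0;
}
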
So, for a query like "SELECT * FROM table WHERE X = key ORDER BY Y", our suggestion, from Berkeley DB's point of view, would be to use a composite index (as you would in SQL), where a key string built as X_Y should do the trick, as explained in the following scenario.
Primary:

key  X   Y
1    10  abc
2    10  aab
3    20  bbc
4    10  bba
5    20  bac
6    30  cba

Secondary:

X_Y      key
10_aab   2
10_abc   1
10_bba   4
20_bac   5
20_bbc   3
30_cba   6

If the query looks like this:
SELECT * FROM primarydb WHERE X = 10 ORDER BY Y
the application can open a cursor on the secondary and begin the loop with the DB_SET_RANGE flag on the key prefix 10_. Iterating with the cursor using DB_NEXT will then return:

2 10 aab
1 10 abc
4 10 bba

The application must check for the end of the range inside the loop; in this case it should stop when it reaches 20_bac.
As in SQL, retrieving by a secondary key is remarkably similar to retrieving by a primary key and the Berkeley DB call will look similar to its primary equivalent.
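
Here is a rough sketch of that loop using the Berkeley DB C++ API. The database handles, the "10_" prefix, and the output format are illustrative, and a production version would also handle transactions, deadlocks, and error returns.

#include <db_cxx.h>
#include <cstdio>
#include <cstring>

// Sketch of the range scan described above: position a cursor on the
// secondary at the first key >= "10_" with DB_SET_RANGE, walk forward with
// DB_NEXT, stop at the first composite key that no longer starts with "10_",
// and look each primary key up in the primary database.
// primarydb and secondarydb are assumed to be open Db handles.
void select_where_x_equals_10_order_by_y(Db *primarydb, Db *secondarydb)
{
    Dbc *cursor;
    secondarydb->cursor(NULL, &cursor, 0);

    char start[] = "10_";
    Dbt skey(start, (u_int32_t)strlen(start));
    Dbt pkey; // the secondary's data is the primary key

    for (int ret = cursor->get(&skey, &pkey, DB_SET_RANGE);
         ret == 0;
         ret = cursor->get(&skey, &pkey, DB_NEXT)) {
        // Composite keys are sorted, so the first key that doesn't start
        // with "10_" marks the end of the X = 10 range.
        if (skey.get_size() < 3 || memcmp(skey.get_data(), "10_", 3) != 0)
            break;

        // Fetch the full row from the primary table by its key.
        Dbt row;
        if (primarydb->get(NULL, &pkey, &row, 0) == 0)
            printf("%.*s\n", (int)row.get_size(), (char *)row.get_data());
    }
    cursor->close();
}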

 

The approach should perform very well and is likely to be the fastest solution. Of course, you can try other methods and tune the database for performance at a later time, after you have the functionality in place. The first and most critical performance factor is almost always cache size. First increase cache size and measure your performance to find an optimal amount for your system and use case. Second, you might test with a bigger database page size that has a more direct relationship to the average size of your keys and values. Lastly, consider the various configuration options available within the transactional subsystem (log buffer size, trickle thread, etc.).
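
As a rough sketch, the tuning knobs mentioned above map to calls like the following. The sizes are placeholders to experiment with, not recommendations, and these calls must be made before the environment and database are opened.

#include <db_cxx.h>

// Hypothetical tuning calls matching the suggestions above.
void tune(DbEnv *env, Db *db)
{
    // 1. Cache size is usually the biggest win: here, 256MB in one region.
    env->set_cachesize(0, 256 * 1024 * 1024, 1);

    // 2. Match the page size to your typical key/value sizes
    //    (must be set before the database is created).
    db->set_pagesize(8192);

    // 3. Transactional subsystem: a larger in-memory log buffer.
    env->set_lg_bsize(1024 * 1024);
}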

If you are not very familiar with how to implement the above in BDB, please read the Guide to Oracle Berkeley DB for SQL Developers.

You will also need to be familiar with secondary indexes, cursor operations such as DBcursor->get(), and flags such as DB_SET_RANGE.

Monday Jan 24, 2011

The C++ Standard Template Library as a BDB Database (part 3)

In the second entry I showed that Berkeley DB's C++ STL API can substitute for in-memory data structures like vectors. Now in this final installment we'll see Berkeley DB at work storing object instance data referenced by STL structures.

Berkeley DB's C++ STL API, the dbstl, will automatically store the STL container in an on-disk database, but what about the data referenced by the elements within the STL container? You must choose how to store the values pointed to within the STL data structures so that they too can be reconstituted from the database. This means storing the actual data that the pointers point to, which isn't so hard. Let's get started.

Suppose we have a simple class like a "Car" (as below) that has an "Engine" member (a pointer) referencing an instance of the class "Engine".

class Engine;   // Forward declaration: Car stores only a pointer to an Engine.
class Car {
public:
  size_t length, width, height;
  Engine *engine;   // Pointer member: needs a deep copy to be persisted.
  string model;
  // Member functions follow ...
};
class Engine {
public:
  size_t horse_power;
  char num_cylinder;
  float displacement;
  // Member functions follow ...
};

Now we want to store the “owner name” of each Car, and we do that using a std::map container; here is the pseudo code:

typedef std::map<const char *, Car> owner_car_map_t;
owner_car_map_t ocmap;
while (has more owner-car pairs to input) {
  // Accept input data for owner name and car, create car.
  cin>>owner>>len>>width>>height>>hp>>ncyl>>displ;
  Engine *pEngine = new Engine(hp, ncyl, displ);
  Car car(len, width, height, pEngine);
  ocmap[owner.c_str()] = car;
}

Note how the preceding code stores all information in memory, so it can't be automatically persisted by Berkeley DB. When using dbstl, we can persist it, as follows.

typedef dbstl::db_map<const char *, Car> owner_car_map_t; // (1)
owner_car_map_t ocmap;
while (has more owner-car pairs to input) {
  // Accept input data for owner name and car, create car.
  cin>>owner>>len>>width>>height>>hp>>ncyl>>displ;
  Engine *pEngine = new Engine(hp, ncyl, displ);
  Car car(len, width, height, pEngine);
  ocmap[owner.c_str()] = car; // (2)
}

Now the code will store the owner name strings and Car objects into the underlying Berkeley DB database as key/data pairs at (2).

In (1), though we are using a "const char*" type as the key, the key string characters themselves can be stored into the underlying Berkeley DB database. And because the value is a class type (Car) rather than a primitive type like int or double, we don't need the ElementHolder template here. If we were storing a char* string pair in the map, we would instead use a type like dbstl::db_map<const char *, char *, ElementHolder<char *> >.

However, the Engine objects referenced by each “car.engine” member pointer are not yet properly stored into the database. At this point only their memory address is stored, which is meaningless for persistence.

This is because, by default, the Berkeley DB STL API (dbstl) simply copies the object to be stored using memcpy, i.e. it performs a shallow copy. Any pointer members in the object are shallow copied; the objects they refer to are not copied. This is sufficient for classes with no pointer members, i.e. when each instance occupies a single contiguous chunk of memory, but not sufficient to completely persist objects that have pointer members.

In order to store the Engine object of each Car object, we need to do a deep copy of such objects, so we register the following callback functions. They cooperate closely to enable a clean deep copy of arbitrarily complex objects.

u_int32_t CarSize(const Car &car) // (3)
{
  return sizeof(car.length) * 3 + car.model.length() + 1 +
    sizeof(*car.engine); // (4)
}
void CopyCar(void *dest, const Car &car) // (5)
{
  // Copy all bytes of the car object into memory chunk M
  // referenced by dest.
  // M is preallocated and just big enough because our CarSize is used
  // to measure the object size. Note the bytes
  // of the Engine object E referenced by car.engine should be copied
  // into this M, rather than the car.engine pointer value,
  // in order to deep copy E.
  char *p = (char *)dest;
  memcpy(p, &car.length, sizeof(size_t));
  p += sizeof(size_t);
  memcpy(p, &car.width, sizeof(size_t));
  p += sizeof(size_t);
  memcpy(p, &car.height, sizeof(size_t));
  p += sizeof(size_t);
  memcpy(p, car.model.c_str(), car.model.length() + 1); // (6)
  p += car.model.length() + 1; // (6)
  memcpy(p, car.engine, sizeof(*car.engine)); // (7)
}
void RestoreCar(Car &car, const void *src) // (8)
{
  // src references the memory chunk M which contains the bytes of a Car
  // object previously marshalled by the CopyCar function, so we know
  // the data structure and composition of M. Thus here we can
  // un-marshal bytes stored in M to assign to each member of car.
  // Since we have the data of the Engine member in M, we create an
  // Engine object E using the 'new' operator, assign to its members
  // using bytes in M, and assign E's pointer to car.engine.
  const char *p = (const char *)src;
  memcpy(&car.length, p, sizeof(size_t));
  p += sizeof(size_t);
  memcpy(&car.width, p, sizeof(size_t));
  p += sizeof(size_t);
  memcpy(&car.height, p, sizeof(size_t));
  p += sizeof(size_t);
  car.model = p; // (9)
  p += car.model.length() + 1; // (9)
  car.engine = new Engine();
  memcpy(car.engine, p, sizeof(*car.engine));
}
dbstl::DbstlElemTraits<Car> *pCarTraits =
  dbstl::DbstlElemTraits<Car>::instance(); // (10)
pCarTraits->set_size_function(CarSize); // (11)
pCarTraits->set_copy_function(CopyCar); // (12)
pCarTraits->set_restore_function(RestoreCar); // (13)

In (3), this function measures a Car's size in bytes; dbstl uses it to allocate just enough space to store the object's bytes. We should account for the space needed by every member we want to store.

In (4), since we want to deep copy the Engine object “E” referenced by “car.engine”, we account for its size too using sizeof(*car.engine); this works because Engine is a simple class, each instance of which occupies a single contiguous block of memory.

And we want to store the model string's trailing '\0' character to unmarshal it easily, so we add 1 to the model.length().

In this function we return a size just big enough that all bytes can be placed into M with no unused bytes left over. This depends on which bytes we want to store, i.e. on what the CopyCar function writes.

In (5), this function does the marshalling work; see the comments in the function body for more information.

In (6), note how the string is copied --- we only copy its characters, ignoring all other members. And we want to copy the trailing '\0' for easier handling later.

In (7), note we are copying the Engine object rather than the car.engine pointer, in order to completely persist the car object. Since in CarSize we have allowed for the space for the Engine object, we have just enough space here.

In (8), this function does the unmarshalling work; see the comments in the function body for more information.

In (9), we are safe to do so here because we copied the trailing '\0'.

In (10), we obtain the global singleton for class Car, where we register the callback functions for dbstl to use internally to store/retrieve Car objects.

In (11), (12), and (13) we register the callback functions. This way, dbstl can deep copy instances of the Car class. Note that the actions in (10) through (13) must be done before (1).


There are some other features in dbstl that are beyond standard C++ STL classes, which enable you to make use of Berkeley DB's advanced features such as secondary indexes, bulk retrieval (to speed up reading chunks of consecutive data items), transactions, replication, etc. Please refer to the dbstl API documentation and reference manual for details.
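
For example, wrapping container operations in a transaction might look roughly like the following, assuming a transactional environment registered with dbstl and the dbstl::begin_txn/commit_txn/abort_txn helper functions; penv, ocmap, and car follow the earlier examples, and the key string is made up.

// Sketch only: penv must have been opened with DB_INIT_TXN (and friends)
// and registered with dbstl, and ocmap must be backed by a transactional
// database for this to have any effect.
dbstl::begin_txn(0, penv);
try {
    ocmap["alice"] = car;        // performed inside the transaction
    dbstl::commit_txn(penv);
} catch (DbException &e) {
    dbstl::abort_txn(penv);
    throw;
}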

The full code for this example can be found here.

There is quite a lot of example code in the download package under $(db)/examples_stl and $(db)/test_stl/base demonstrating these advanced features, and $(db)/test_stl/ms_examples and $(db)/test_stl/stlport contain examples of standard dbstl usage. This example code is a good resource to start with.

Sunday Jan 23, 2011

The C++ Standard Template Library as a BDB Database (part 2)


In the first entry I touched on some of the features available when using Berkeley DB's C++ STL API. Now it's time to dive into more detail.

Every copy of Berkeley DB comes with this API; there is nothing extra to download or add on. Simply download the latest version, configure it to build the C++ and STL APIs, and you're ready to go.

We've worked to make it very easy to use dbstl if you already have C++ STL programming experience. It is especially easy to map your existing C++ STL container code to dbstl. In the following sections I will list several pieces of C++ code that use the C++ STL, and show you how to convert them to use dbstl instead.

If you want to use dbstl to manage purely in-memory data structures, very little conversion from standard STL is needed, but most of the time you will choose Berkeley DB for its persistence, concurrency, and transactional recovery features. To use those features you will need to use some of the basic features of dbstl.

I. Basic Features

1. Suppose we have the following code which uses C++ STL std::vector container:

int vector_test(int, char**)
{
  typedef std::vector<double> dbl_vec_t; // (1)
  dbl_vec_t v1; // Empty vector of doubles. (1)
  v1.push_back(32.1);
  v1.push_back(40.5);
  // The rest follows ...

This is the code using dbstl instead; note that only the lines marked (1) are affected.

int vector_test(int, char**)
{
  typedef dbstl::db_vector <double, ElementHolder<double> > dbl_vec_t; // (1)
  dbl_vec_t v1; // Empty vector of doubles. (1)
  v1.push_back(32.1);
  v1.push_back(40.5);
  // The rest follows ...

The reasons for the change are:

a. All dbstl classes and global functions are defined inside "dbstl" namespace.

b. For all dbstl container class templates, we must add one more type parameter, ElementHolder<T>, if T is a primitive data type like int, double, char*, etc.; if T is a class type, this type parameter is not needed.

c. Here we used the default constructor for v1; in the dbstl case, this means an anonymous in-memory database is created and used only by the v1 vector. The underlying database can be shared only within the current process, by sharing the v1 vector itself or its database handle.

Alternatively, you can create a database environment and open a database inside it explicitly and use the opened handles to create a container like the following:

int vector_test(int, char**)
{
  typedef dbstl::db_vector <double, ElementHolder<double> > dbl_vec_t;
  DbEnv *penv = new DbEnv(DB_CXX_NO_EXCEPTIONS); // (2)
  penv->open("dbenv",
    flags | DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0777); // (3)
  Db *pdb = dbstl::open_db(penv, "vector2.db",
    DB_RECNO, DB_CREATE | dboflags, 0); // (4)
  dbstl::register_db_env(penv); // (5)
  dbstl::register_db(pdb); // (6)
  dbl_vec_t v1(penv, pdb); // (7)
  v1.push_back(32.1);
  v1.push_back(40.5);
  // The rest follows ...

This snippet contains the majority of the code you need to add to convert a C++ STL application into a dbstl-enabled application.

At (2), we must create the DbEnv object using the “new” operator; the same requirement applies when creating a Db object. At (3), the “flags” variable can be set to open a transactional environment, a concurrent data store environment, or simply a data store environment. In fact, any valid use of Berkeley DB via its C/C++ API can be used here.
Between (2) and (3), you can set various flags or callback functions to configure the environment, in the same way you would with the Berkeley DB C++ API. There is also a helper function, dbstl::open_env, which opens an environment in one call, but with fewer configuration options.
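
For example, a few typical configuration calls might look like the following sketch; the cache size and error prefix are arbitrary illustrations, not requirements.

DbEnv *penv = new DbEnv(DB_CXX_NO_EXCEPTIONS);
// Hypothetical configuration calls; they must happen after the DbEnv is
// constructed and before penv->open() is called, exactly as with the plain
// Berkeley DB C++ API.
penv->set_cachesize(0, 64 * 1024 * 1024, 1); // 64MB cache in one region
penv->set_errpfx("dbstl-example");           // prefix for error messages
penv->open("dbenv",
  flags | DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0777); // as at (3) above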

At (4), we use the helper function dbstl::open_db to open the database and optionally set various flags on it. It makes opening a database easier, though there are some things you can't do with it, for example setting callback functions. So use it if it is sufficient for your needs, or simply open the database the same way you would with the Berkeley DB C++ API.

Note that different types of containers have different requirements for their database handles; see the dbstl API documentation for details. Here db_vector requires the database to be of type DB_RECNO.

At (5) and (6), we must register the created environment and database handles with dbstl in each thread that uses them; see the documentation for the two register functions for details.

At (7), we pass the environment and database handles to v1, so that v1 is backed by the pdb database. Other threads of control can also open the database and access it concurrently, using dbstl or simply the Berkeley DB C/C++ API.

2. Apart from the above code to construct a dbstl container, the rest of the code does not need any modification. The following snippet can be appended to each of the three snippets above to form three complete functions that do basically the same thing:

  for(dbl_vec_t::iterator itr = v1.begin(); itr != v1.end(); ++itr)
    *itr *= 2;
  for (int i = 0; i < v1.size(); i++)
    v1[i] += 3;
  dbl_vec_t v2; // A second vector to swap and assign with.
  v1.swap(v2); // Swap the vectors' contents.
  v2 = v1; // Assign one vector to another.
  assert(v1 == v2);
  std::reverse(v1.begin(), v1.end());
  
  // More standard features follow ...
  for(dbl_vec_t::reverse_iterator ritr = v1.rbegin(); ritr != v1.rend(); ++ritr)
    *ritr /= 2;
  v1.pop_back();
  v1.insert(v1.begin(), 34);
  assert(v1.front() == 34);
  v2.assign(v1.begin(), v1.end());
  return 0;
}

In the first for loop, v1.begin() accepts some additional optional parameters in dbstl that control the behavior of the created iterator. Refer to the dbstl API documentation for more information.

After the "*itr *= 2" assignment, the new value of the element is stored into the database; the same is true for "v1[i] += 3".

The v1.size() call can be configured to compute faster but less precisely, which is helpful when the database contains millions of key/data pairs. Like size(), a number of methods across the container classes have default parameters that make them behave like C++ STL, but can be configured to work better with Berkeley DB in special situations.

In the swap call, the key/data pairs in v2's backing database are written to v1's backing database after the data in v1 is truncated (and vice versa); similarly, in the assignment "v2 = v1", v1's key/data pairs are written into v2 after the data in v2 is truncated.

As the std::reverse call shows, almost all algorithm functions in the C++ STL library work with dbstl, because dbstl has standard iterators and containers whose default behaviors follow the C++ STL specifications. The exceptions are “inplace_merge”, “find_end”, and “stable_sort” in the GCC compiler's STL library; these three functions don't work correctly with dbstl. Apart from them, C++ STL algorithms are applicable to dbstl.

Finally, as the pop_back, insert, front, and assign calls show, dbstl containers and iterators have all the methods that the corresponding C++ STL containers have, each with identical default behavior. So you can use dbstl the same way you use C++ STL containers and iterators to access Berkeley DB.

In our next post...
Next up we'll dive even deeper into the more advanced features of the dbstl API. For now, if you'd like to read ahead, the code is here.

Monday Nov 15, 2010

Berkeley DB TechCast Live, Watch at 10AM/PST TODAY!

Today is a big news day for Berkeley DB. We're on Oracle.com, in the second banner, because today Dave Segleau, the Director of Product Management for Oracle Berkeley DB products, will chat with Justin Kestelyn about embedded databases and how Berkeley DB is the right technology at the right time for your edge/embedded/application needs. From cloud services to phone storage, Berkeley DB has you covered. Please join us by watching the TechCast at 10AM PST today.

Tuesday Nov 02, 2010

Berkeley DB Java Edition 4.1.6

Yesterday we released a new version of Berkeley DB Java Edition. This new release has some major enhancements for speed. BDB JE has always been as fast as the I/O + stable storage (disk) system for writes, thanks to its write-once, append-only, log-based architecture for fully durable commits (semi-durable commits, those which write to operating system buffers rather than to stable storage, operate at in-memory speeds). The issue until now was with random reads. Now, even with modest-sized caches (512MB), you can experience predictable latency for random out-of-cache reads, even for multi-TB databases.

This is a first in the pure-Java world. BDB JE is the only solution when you need large-scale, predictable ACID storage for non-relational data. Imagine configuring your heap to 2GB and BDB JE's cache to 512MB, then accessing TBs of data on disk, knowing that your application still has 1.5GB of JVM memory to use.

Memory management and GC have always been tricky to get right when building large scale Java systems. With this release of Berkeley DB Java Edition we help take you one step closer to a predictable database in pure-Java.

Read more on Charlie Lamb's blog.

Friday Oct 15, 2010

Open SQL Camp, Boston


Friday Sep 17, 2010

Berkeley DB 11gR2 (11.2.5.1.19) Released!

About

Information about Berkeley DB products directly from the people who build them.
