Tuesday Mar 29, 2011

Ubiquitous Mobile Applications


It goes without saying that smartphones are already very popular and enjoying rapid growth. But in the last year a few things that people have been predicting for a long time finally happened, things that could make smartphones and mobile apps an even bigger part of our lives.
First off, in what many consider to be a bold move from a normally conservative company, GM has already released mobile apps that allow many of their cars to be remotely monitored, unlocked, and even started. So as long as you have a high degree of confidence in your smartphone's battery life, you don't need your car keys anymore.

Next, we have Starbucks, who have quietly sidestepped the ongoing mobile payments dispute by launching a barcode-based payments app that doesn't require any special technology. Other large retailers are probably watching this closely, and if the big cell phone companies don't get their acts together, they might just end up missing the boat again. In any case, many industry observers feel that the Starbucks system could be the spark that sets off an explosion of mobile payment systems, from McDonald's to Neiman Marcus, and everywhere in between.

But I saved the biggest one for last. Are you ready? Take a deep breath, and then read this: smartphones outsold PCs for the first time in Q4 2010. This milestone happened much sooner than anyone expected, thanks mostly to new Android activation numbers so high they're almost hard to believe. At the start of 2011 people were talking about activations in excess of 125k units per day. Currently the number being tossed around is 300k units per day. If Android can keep that up, the platform alone will outsell PCs in 2011. Folks, this could be the end of an era.

When we combine these milestones, the smartphone appears poised to become a sort of ultimate Swiss Army knife: a single device that does everything. The thought of it is exciting; imagine the convenience! Alas, as with all great things in life, there is a downside. Money, privacy, communication, and even physical security could be compromised if the software or data on the device is not secure.

The average person has at least three things with them when they leave the house: wallet, keys, and cell phone. Today, losing any one of the three would be bad; a huge inconvenience in the best case. But when a cell phone is lost today, the unfortunate owner is not normally forced to cancel their credit cards and reprogram their car locks. Worse, if we envision the future that the stories above point to, any or all of the three could be stolen electronically, by a thief far away, even in a different country. That should be a sobering prospect, both for consumers and for the various companies providing services to them. It is easy to tout the benefits of technology convergence; it's a fun and exciting topic. But as technology providers, it is our duty to also consider the potential drawbacks.

To keep everyone safe, sensitive mobile data needs to be securely stored on the local device and transmitted reliably to each company's data vaults. Those same companies need to be able to determine, with complete accuracy, whether a given request is coming from an authorized device or an imposter, be it a mobile purchase or a request to unlock your car, or even your home.

Two critical components of a safe, trustworthy solution for storing and syncing sensitive mobile data are Oracle's Berkeley DB and Database Mobile Server. Most of the world's large enterprises already store their critical data in Oracle Database. Berkeley DB is a stable, mature product with over ten years of proven history, ideally positioned to be the mobile data store that complements Oracle Database. It is secure in the traditional sense, meaning it can protect mobile data from malicious intent, and in the database architecture sense as well, with full transactional guarantees. Database Mobile Server provides the final piece of a complete solution: the capability to synchronize data between mobile devices and your Oracle backend, and to manage the mobile application, its data, and even the device itself if desired.
If you're considering a mobile application that will access sensitive data, data that you already trust Oracle to store in your backend infrastructure, Berkeley DB and Database Mobile Server are the best choices to handle it.
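To make the transactional-guarantees point concrete, here is a minimal sketch of a transactional write using the Berkeley DB Java Edition API (the C library offers the same transactional model, and Database Mobile Server configuration is omitted entirely); the environment path, database name, and keys are all invented for illustration.

    import com.sleepycat.je.*;
    import java.io.File;

    public class MobileStoreSketch {
        public static void main(String[] args) throws Exception {
            // Open a transactional environment; "/data/app-store" is illustrative.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            Environment env = new Environment(new File("/data/app-store"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database db = env.openDatabase(null, "wallet", dbConfig);

            // Either both writes commit or neither does: the guarantee that
            // matters when the data is a payment credential.
            Transaction txn = env.beginTransaction(null, null);
            try {
                db.put(txn, new DatabaseEntry("card:1234".getBytes("UTF-8")),
                            new DatabaseEntry("tokenized-credential".getBytes("UTF-8")));
                db.put(txn, new DatabaseEntry("card:1234:last-used".getBytes("UTF-8")),
                            new DatabaseEntry("2011-03-29".getBytes("UTF-8")));
                txn.commit();
            } catch (Exception e) {
                txn.abort();
                throw e;
            }
            db.close();
            env.close();
        }
    }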

Sunday Jan 30, 2011

Is Berkeley DB a NoSQL solution?

Berkeley DB is a library. To use it to store data you must link the library into your application. You can access the API from most programming languages, and the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX, written by AT&T's Ken Thompson in 1979. DBM was a simple key/value, hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, the future founders of Sleepycat Software wrote libdb (a.k.a. Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage.
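That key/value model survives in Berkeley DB's API today. Here is a minimal sketch using the Berkeley DB Java Edition API (the C API follows the same open/put/get pattern); the environment path and database name are invented for illustration.

    import com.sleepycat.je.*;
    import java.io.File;

    public class DbmStyleSketch {
        public static void main(String[] args) throws Exception {
            // A Berkeley DB "environment" holds one or more databases on local disk.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            Environment env = new Environment(new File("/tmp/bdb-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            Database db = env.openDatabase(null, "example", dbConfig);

            // The DBM heritage shows: store and fetch opaque byte arrays by key.
            DatabaseEntry key = new DatabaseEntry("fruit".getBytes("UTF-8"));
            db.put(null, key, new DatabaseEntry("apple".getBytes("UTF-8")));

            DatabaseEntry value = new DatabaseEntry();
            if (db.get(null, key, value, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                System.out.println(new String(value.getData(), "UTF-8")); // "apple"
            }
            db.close();
            env.close();
        }
    }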

At that time databases almost always lived on a single node; even the most sophisticated databases offered only simple two-node fail-over. If you had a lot of data to store, you chose between the few commercial RDBMS products and writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today Tokyo Cabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law: processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage; new solutions were needed.

A number of novel solutions stepped up, and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated BASE vs. ACID debate, but in essence this was simply the market looking at what to trade off to meet the new demands. The new database systems shared many qualities: they were designed to handle massive amounts of data and millions of requests per second, and to scale out across multiple systems.

The first large-scale, successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist: it was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web storefront didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few requiring more complex joins. They set about using relational technology only where it was the best solution for the task, in places like accounting and order fulfillment, but not in the myriad other situations.

The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak, and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL, or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one of them, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak was at first entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage similar to Berkeley DB's, except that it is based on a hash table that must reside entirely in memory, rather than a btree that can live in memory or on disk.

Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, writes are committed first at a single node, the master, and the changes are then delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become consistent eventually. Berkeley DB's HA scales out quite well for read-intensive applications, and it effectively eliminates the central point of failure by allowing replica nodes to be elected (using a Paxos algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running on Berkeley DB HA, scaled out as a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across the nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not.
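To give a feel for how little it takes to join a replica group, here is a sketch using the Berkeley DB Java Edition HA API (the C library's replication manager exposes the same concepts through different calls); the group name, node name, port, and path are all made up for illustration.

    import com.sleepycat.je.*;
    import com.sleepycat.je.rep.*;
    import java.io.File;

    public class HaNodeSketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true); // HA requires transactions

            // Identify this node and the group it belongs to; values illustrative.
            ReplicationConfig repConfig = new ReplicationConfig();
            repConfig.setGroupName("demoGroup");
            repConfig.setNodeName("node1");
            repConfig.setNodeHostPort("localhost:5001");
            // A known member used to locate the rest of the group.
            repConfig.setHelperHosts("localhost:5001");

            // Joining triggers an election; this node comes up as master or replica.
            ReplicatedEnvironment repEnv =
                new ReplicatedEnvironment(new File("/tmp/ha-node1"), repConfig, envConfig);
            System.out.println("Joined as: " + repEnv.getState()); // MASTER or REPLICA
            repEnv.close();
        }
    }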

So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forget all the noise about how NoSQL solutions are complex distributed databases: when you boil them down to a single node, you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally offer some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really.

Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust foundation on which to build such a system. Re-inventing node-local data storage isn't easy. A lot of people are coming to appreciate the sophisticated features found in Berkeley DB, and even to mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. Our key/value API could be extended over the network using any of a number of existing protocols, such as memcache or HTTP/REST. We could extend our node-local data partitioning out across replicated nodes. We even have a nice query language and a cost-based query optimizer in our BDB XML product that we could reuse were we to build a document-based, NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing and jump into this NoSQL market within a single product development cycle.
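To illustrate just how thin that missing layer can be, here is a toy sketch of an HTTP/REST face over a local Berkeley DB Java Edition store, using the JDK's built-in com.sun.net.httpserver; the port, URL scheme, and database name are all invented, and a real product would need authentication, replication, and partitioning on top.

    import com.sleepycat.je.*;
    import com.sun.net.httpserver.*;
    import java.io.*;
    import java.net.InetSocketAddress;

    public class TinyKvServer {
        public static void main(String[] args) throws Exception {
            // Node-local storage: the part Berkeley DB already solves.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            Environment env = new Environment(new File("/tmp/kv-env"), envConfig);
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            final Database db = env.openDatabase(null, "kv", dbConfig);

            // The "NoSQL layer": GET /kv/<key> reads, PUT /kv/<key> writes.
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/kv/", new HttpHandler() {
                public void handle(HttpExchange ex) throws IOException {
                    String keyStr = ex.getRequestURI().getPath().substring("/kv/".length());
                    DatabaseEntry key = new DatabaseEntry(keyStr.getBytes("UTF-8"));
                    if ("PUT".equals(ex.getRequestMethod())) {
                        ByteArrayOutputStream buf = new ByteArrayOutputStream();
                        InputStream in = ex.getRequestBody();
                        byte[] chunk = new byte[4096];
                        for (int n; (n = in.read(chunk)) > 0; ) buf.write(chunk, 0, n);
                        db.put(null, key, new DatabaseEntry(buf.toByteArray()));
                        ex.sendResponseHeaders(204, -1);
                    } else {
                        DatabaseEntry value = new DatabaseEntry();
                        if (db.get(null, key, value, LockMode.DEFAULT)
                                == OperationStatus.SUCCESS) {
                            ex.sendResponseHeaders(200, value.getData().length);
                            ex.getResponseBody().write(value.getData());
                        } else {
                            ex.sendResponseHeaders(404, -1);
                        }
                    }
                    ex.close();
                }
            });
            server.start();
            System.out.println("Toy key/value server on http://localhost:8080/kv/");
        }
    }

With the sketch running, curl -X PUT --data 'apple' http://localhost:8080/kv/fruit stores a value and curl http://localhost:8080/kv/fruit reads it back; everything NoSQL-specific lives above the storage engine.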

Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

Tuesday Nov 02, 2010

Berkeley DB Java Edition 4.1.6

Yesterday we released a new version of Berkeley DB Java Edition. This new release has some major enhancements for speed. BDB JE has always been as fast as the I/O and stable storage (disk) system for writes, due to its write-once, append-only, log-based architecture for fully durable commits (semi-durable commits, which go to operating system buffers rather than to stable storage, operate at in-memory speeds). The issue until now was with random reads. Now, even with modest-sized caches (512MB), you can experience predictable latency for random out-of-cache reads, even for multi-TB databases.
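The durable vs. semi-durable distinction maps directly onto JE's per-transaction durability policies. Here is a minimal sketch (the environment path and database name are illustrative):

    import com.sleepycat.je.*;
    import java.io.File;

    public class DurabilitySketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            Environment env = new Environment(new File("/tmp/je-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database db = env.openDatabase(null, "demo", dbConfig);

            // Fully durable: commit fsyncs the append-only log to stable storage.
            TransactionConfig durable = new TransactionConfig();
            durable.setDurability(Durability.COMMIT_SYNC);
            Transaction t1 = env.beginTransaction(null, durable);
            db.put(t1, new DatabaseEntry("k".getBytes("UTF-8")),
                       new DatabaseEntry("v1".getBytes("UTF-8")));
            t1.commit();

            // Semi-durable: commit writes to OS buffers, at in-memory speed.
            TransactionConfig semiDurable = new TransactionConfig();
            semiDurable.setDurability(Durability.COMMIT_WRITE_NO_SYNC);
            Transaction t2 = env.beginTransaction(null, semiDurable);
            db.put(t2, new DatabaseEntry("k".getBytes("UTF-8")),
                       new DatabaseEntry("v2".getBytes("UTF-8")));
            t2.commit();

            db.close();
            env.close();
        }
    }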

This is a first in the pure-Java world. BDB JE is the only solution when you need large-scale, predictable ACID storage for non-relational data. Imagine configuring your heap to 2GB and BDB JE's cache to 512MB, then accessing TBs of data on disk, knowing that your application still has 1.5GB of JVM memory to use.
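A sketch of that configuration, assuming the JVM is started with -Xmx2g (the cache size is the one from the example above; the path is invented):

    import com.sleepycat.je.*;
    import java.io.File;

    public class CacheSizingSketch {
        public static void main(String[] args) throws Exception {
            // Run the JVM with -Xmx2g; give JE 512MB of that heap for its cache,
            // leaving roughly 1.5GB for the application itself.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setCacheSize(512L * 1024 * 1024);
            Environment env = new Environment(new File("/data/big-store"), envConfig);

            // Out-of-cache reads now fault in from the on-disk log with
            // predictable latency, even when the database is multi-TB.
            env.close();
        }
    }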

Memory management and GC have always been tricky to get right when building large-scale Java systems. With this release of Berkeley DB Java Edition we take you one step closer to a predictable database in pure Java.

Read more on Charlie Lamb's blog.

Friday Oct 15, 2010

Open SQL Camp, Boston
