Monday Jul 14, 2014

Berkeley DB 12cR1 (12.1.6.1) released

New Releases of Oracle Berkeley DB

Available For Download Now
Subject: Release Update


The newest release of Berkeley DB 12cR1 (12.1.6.1) is available now. Here is a summary of the new features:

  • New upgrades to in-memory OLTP throughput and performance
  • New HA improvement to identify a single master in a two-site replication group
  • New HA usability improvements
  • New BLOB support in replication
  • Removed the need for a fail-check monitor process
  • Reduced the time required for a database backup
  • and a lot more!

Berkeley DB continues to enable the most powerful embedded database solutions:

  • Handles TBs of data with a 1MB library
  • Flexible, lightweight storage engine with a small footprint
  • Runs on everything from low-power ARM devices to clusters of high-end servers
  • Over 50 open source software projects embed BDB -- check them out on Wikipedia
  • Completely customizable: choose from 5 different access methods
  • Industrial quality and battle tested, with over 200 million deployments

BDB is hands-down the best edge, mobile, and embedded database available to developers.  With the flexibility to place log files and/or database files in any directory, applications can easily take advantage of the I/O performance of flash caches, flash disks, or SSDs.
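For example, here is a minimal sketch in C of how an application might point the environment's transaction logs at an SSD-backed directory while keeping database files on a larger volume, using DB_ENV->set_lg_dir() and DB_ENV->add_data_dir(). The paths and the helper function are illustrative assumptions, not part of any particular application.

    /* Hedged sketch: logs on fast flash/SSD storage, database files on a
     * larger, slower volume. Paths are illustrative assumptions; both
     * directory calls must be made before DB_ENV->open(). */
    #include <db.h>
    #include <stdio.h>

    int open_env_with_split_storage(DB_ENV **envp, const char *home)
    {
        DB_ENV *env;
        int ret;

        if ((ret = db_env_create(&env, 0)) != 0)
            return ret;

        /* Write-ahead logs go to an SSD-backed directory. */
        if ((ret = env->set_lg_dir(env, "/ssd/bdb/logs")) != 0)
            goto err;

        /* Database files go to bulk storage. */
        if ((ret = env->add_data_dir(env, "/data/bdb")) != 0)
            goto err;

        if ((ret = env->open(env, home,
            DB_CREATE | DB_INIT_MPOOL | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG,
            0)) != 0)
            goto err;

        *envp = env;
        return 0;

    err:
        fprintf(stderr, "environment setup failed: %s\n", db_strerror(ret));
        env->close(env, 0);
        return ret;
    }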


Top notch performance:

  • Berkeley DB performs over 5 million operations a second on a 100GB database running on a 1/8 rack Exadata V2 Database Machine configured with 256GB RAM and 12 cores.
  • Berkeley DB can insert 100 thousand records in 72 milliseconds and read those records back in 30 milliseconds, running on an 8-core Xeon-based commodity server.  The records contain a 4-byte key and a 48-byte data value.  This was run using the benchmark described on pages 34-36 of an ebook on SQL Server 2014 from Microsoft Press.
  • To put this into perspective, we compared Berkeley DB to SQL Server 2014's In-Memory OLTP feature (code-named Hekaton), which uses similar technology.  Berkeley DB, an open source product, is about 20% faster than SQL Server 2014, which takes 94 milliseconds for the same 100K insert operations on an 8-core x64 Intel commodity box with 14 GB of memory.

    We are making available a benchmark program in C that can be configured to validate Berkeley DB throughput for the 100K insert test here.
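As a rough illustration only (this is not the official benchmark program mentioned above), a timing loop for the 100K-record insert test might look like the following sketch in C; the environment path, cache size, lock limits, and flags are assumptions chosen for a simple standalone run.

    /* Hedged sketch of a 100K-insert timing test against Berkeley DB.
     * Build with: cc -o bench bench.c -ldb
     * Create the ./bench_env directory before running. */
    #include <db.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>

    #define NUM_RECORDS 100000

    int main(void)
    {
        DB_ENV *env;
        DB *dbp;
        DB_TXN *txn;
        DBT key, data;
        struct timeval start, end;
        unsigned int i;
        char value[48];
        int ret;

        /* Transactional environment with a 256MB cache. */
        if ((ret = db_env_create(&env, 0)) != 0)
            goto fail;
        env->set_cachesize(env, 0, 256 * 1024 * 1024, 1);
        env->set_lk_max_locks(env, 100000);    /* one txn holds many page locks */
        env->set_lk_max_objects(env, 100000);
        if ((ret = env->open(env, "./bench_env",
            DB_CREATE | DB_INIT_MPOOL | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG,
            0)) != 0)
            goto fail;

        /* B-tree database inside the environment. */
        if ((ret = db_create(&dbp, env, 0)) != 0 ||
            (ret = dbp->open(dbp, NULL, "bench.db", NULL, DB_BTREE,
                             DB_CREATE | DB_AUTO_COMMIT, 0)) != 0)
            goto fail;

        memset(value, 'x', sizeof(value));
        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        data.data = value;
        data.size = sizeof(value);             /* 48-byte data value */

        gettimeofday(&start, NULL);
        if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
            goto fail;
        for (i = 0; i < NUM_RECORDS; i++) {
            key.data = &i;
            key.size = sizeof(i);              /* 4-byte key */
            if ((ret = dbp->put(dbp, txn, &key, &data, 0)) != 0)
                goto fail;
        }
        if ((ret = txn->commit(txn, 0)) != 0)
            goto fail;
        gettimeofday(&end, NULL);

        printf("Inserted %d records in %.1f ms\n", NUM_RECORDS,
               (end.tv_sec - start.tv_sec) * 1000.0 +
               (end.tv_usec - start.tv_usec) / 1000.0);

        dbp->close(dbp, 0);
        env->close(env, 0);
        return EXIT_SUCCESS;

    fail:
        fprintf(stderr, "benchmark failed: %s\n", db_strerror(ret));
        return EXIT_FAILURE;
    }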


What folks are saying:

Open source Fedora package maintainer, Lubomir Rintel, says "Berkeley DB has quietly served behind the scenes as the database for the RPM Package Manager. It has proven itself time and time again as a robust and efficient storage engine. It stores the meta information of the installed RPMs. Under heavy workloads, BDB proves itself reliable. Countless people that use popular Linux distributions have used BDB through RPM and never knew it. With this new release, BDB continues its tradition of being a solid storage engine."

Oracle Tape Product Manager, Dan Deppen, says "Berkeley DB is integral to Oracle StorageTek Storage Archive Manager (SAM-QFS). We have been embedding Berkeley DB in our product for over a decade and it is vital to our disk archiving feature, which is used to send files to remote data centers to enable disaster recovery. Performance and scalability are critical because SAM-QFS supports some of the largest archive customers in the world. HPC sites, research centers, national libraries and other customers requiring massive scalability and high reliability depend on SAM-QFS and Berkeley DB to maintain availability of their critical data."

Oracle Identity Management Vice President, Shirish Puranik, says "Berkeley DB is a critical component of Oracle Unified Directory (OUD) and Oracle Directory Server Enterprise Edition (ODSEE). We have been using Berkeley DB in these products as a high performance, transactional embedded database repository for several years. Berkeley DB has exceeded our expectations for performance and stability. Our Berkeley DB based products are widely deployed in production at the largest telcos and financial institutions all over the world. The improvements for BLOB support and high availability in the 6.1 release are welcome."



Software Downloads

Questions?

Please direct product questions to our Product Help mailing list for Berkeley DB. You can also email the Berkeley DB Product Management team directly.

___________________________________________________
Berkeley DB Product Management
Internal - https://stbeehive.oracle.com/teamcollab/wiki/Berkeley+DB+Inside+Scoop
OTN - http://www.oracle.com/technetwork/database/berkeleydb/overview/index.html

Thursday Jun 12, 2014

Data management in unexpected places

When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability.

Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs, etc. For each packet, the router has to determine the address of the next “hop” toward the destination and decide how to prioritize the packet. If it’s a high-priority packet, it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority packets may need to be temporarily stored (held back), but still handled fairly. If there are security or privacy requirements associated with the packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability, i.e., if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator: the router is most likely in a remote location, so it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers to determine the destination, the kind of packet (e.g., voice vs. data), the SLAs associated with the “owner” of the packet, and so on. It then looks up its internal database of “rules” for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time if it’s a low-priority packet.

Ah, this sounds very much like a database problem. For each packet, you minimally have to:

  • Look up the most efficient next “hop” toward the destination. The “most efficient” next hop can change, depending on latency, availability, etc.
  • Look up the SLA and determine the priority of this packet (e.g., voice calls get priority over FTP data transfers).
  • Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session; the context from the “header” packet needs to be stored in the router in order to make this work.
  • If the priority of the packet is low, “store” the packet temporarily in the router until it is time to forward it to the next hop.
  • Update various statistics about the packet.

In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this?

Berkeley DB is a proven, reliable, high-performance, highly available embeddable database designed for exactly these kinds of usage scenarios, and it is already being used in production in scenarios just like this.

First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database or as a disk-persistent database. BDB provides high availability: if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering, so there's no need for manual intervention to maintain a BDB application, and no need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations.
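To make the "single transaction" point above concrete, here is a minimal sketch using BDB's C transaction API. The routes and statistics databases, the key layout, and the counter record are illustrative assumptions, not taken from any real router product; the point is simply that the next-hop lookup and the statistics update either commit together or not at all.

    /* Hedged sketch: look up the next hop for a destination and bump a
     * per-destination packet counter inside one Berkeley DB transaction.
     * The 'routes' and 'stats' databases and record formats are
     * illustrative assumptions. */
    #include <db.h>
    #include <string.h>

    int route_packet(DB_ENV *env, DB *routes, DB *stats, const char *dest)
    {
        DB_TXN *txn = NULL;
        DBT key, hop, count;
        unsigned long packets = 0;
        int ret;

        memset(&key, 0, sizeof(key));
        memset(&hop, 0, sizeof(hop));
        memset(&count, 0, sizeof(count));
        key.data = (void *)dest;
        key.size = (u_int32_t)strlen(dest) + 1;

        if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
            return ret;

        /* 1. Look up the next hop for this destination. */
        if ((ret = routes->get(routes, txn, &key, &hop, 0)) != 0)
            goto err;
        /* ... hand hop.data to the forwarding path here ... */

        /* 2. Update the per-destination packet counter in the same txn. */
        count.data = &packets;
        count.ulen = sizeof(packets);
        count.flags = DB_DBT_USERMEM;
        if ((ret = stats->get(stats, txn, &key, &count, 0)) != 0 &&
            ret != DB_NOTFOUND)
            goto err;        /* a missing counter is fine; real errors are not */
        packets++;           /* get() filled 'packets' if the key existed */
        count.size = sizeof(packets);
        if ((ret = stats->put(stats, txn, &key, &count, 0)) != 0)
            goto err;

        /* Commit: the lookup and the counter update become visible together. */
        return txn->commit(txn, 0);

    err:
        txn->abort(txn);
        return ret;
    }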

BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice: spend valuable resources implementing similar functionality yourself, or simply embed BDB in your application and off you go! I know what I'd do: choose BDB, so I can focus on my business problem. What will you do?
