
TimesTen In-Memory Database
for Extreme Performance

The evolution of DB architectures in the quest for scalability

Doug Hood
TimesTen Cloud Product Manager

Summary

This article looks at how database architectures have evolved over time to address ever-increasing scalability challenges. Only the major architectural changes are covered, with just a few examples for each phase, so your favorite technology is likely missing. A taxonomy of DB architectures is provided and the question of what the next evolution will be is raised. The emphasis is on commercially successful systems and systems used by the author.

 

Monolithic / Mainframe databases

In the beginning, the application was the database. These business applications were often written in COBOL (1959), FORTRAN (1957) or assembly language. The application was monolithic in that it handled the user interface, application logic and basic CRUD (Create, Read, Update and Delete) operations. The CRUD operations used records and files. Some examples of this are COBOL with sequential, indexed sequential and direct access files from the 1960s.

IBM Mainframe with COBOL and file access methods

In time, this evolved to the separation of applications from the CRUD operations.  The 'database' handled CRUD operations, persistence, navigation and concurrency.  There was usually a one-to-one relationship between navigating the database structures and how those data structures were stored on disk. An example of this is the hierarchical DBMS, IBM Information Management System (IMS) from 1968.

IBM IMS on MVS

The one-to-one relationship between a file representation and its navigation usually meant that either the system could not be changed when business needs changed, or that both the file representation and all of the associated application logic that traversed the file structure had to change. Neither of these options was good.

Eventually this problem was solved with the introduction of the relational model and the first commercial relational databases (Oracle 1979, IBM SQL/DS 1981 and DB2/MVS 1983). Relational Database Management Systems (RDBMS) like DB2 enabled developers to declaratively define what data they wanted without needing to know how that data was stored or accessed. This declarative approach using SQL led to great gains in developer productivity and enabled RDBMS-based systems to better suit business needs than the older file-based systems. Interestingly, recent versions of IMS (2013) now also support SQL.
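To make the contrast concrete, here is a minimal sketch using Python's built-in sqlite3 module (the schema and data are hypothetical). The query only declares what rows are wanted; the RDBMS decides how to access and join the underlying storage, which is exactly what the navigational file and IMS models could not offer:

import sqlite3

# Hypothetical two-table schema in an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER)")
con.execute("INSERT INTO dept VALUES (1, 'Sales'), (2, 'Engineering')")
con.execute("INSERT INTO emp VALUES (10, 'Ada', 2), (11, 'Grace', 2), (12, 'Edgar', 1)")

# Declarative access: state WHAT is wanted, not HOW to traverse storage.
# The optimizer chooses the access path (scan, index, join order).
query = """
    SELECT e.name
    FROM emp e
    JOIN dept d ON e.dept_id = d.id
    WHERE d.name = 'Engineering'
"""
for (name,) in con.execute(query):
    print(name)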

Most of the current NoSQL databases could be considered to have evolved from the concepts and architecture of IBM IMS.

IBM DB2 on MVS

However, running the user interface, application logic and database all on a single mainframe was not always the most scalable or cost-effective solution.

 

Client Server Databases

Although minicomputers were cheaper than mainframe computers [and could also run RDBMS], it was the introduction of RISC-based UNIX computers and personal computers that made client server databases practical. Client server databases generally have a single large machine which runs only the RDBMS server. The client machines run the user application logic, present the user interface on some windowing system [eg MS Windows] and communicate with the RDBMS server over a LAN using a DB client library.

Generic client server DB architecture

This client server architecture was generally a lot cheaper and could also be faster than the monolithic mainframe model (eg COBOL with files, IMS or DB2) where everything (user interface, application logic and DB access) ran on a single mainframe computer. The client server architecture also enabled the client machines to scale out horizontally: as long as the single RDBMS server was not the bottleneck, adding more [cheap] client machines enabled more throughput. Common examples of client server RDBMS from the late 1980s and 1990s were Sybase, Oracle, SQL Server, Informix and DB2 running on platforms such as Sun Solaris, HP-UX, IBM AIX, OS/2 and Microsoft Windows.
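As a sketch of the client side of this architecture, the snippet below uses psycopg2 (the PostgreSQL client library) as a stand-in DB client library; the host name, credentials and table are hypothetical placeholders. The application logic runs on the client machine, and only the SQL text and the result rows cross the LAN:

import psycopg2

# Connect over the LAN to the single machine running the RDBMS server.
# Host, credentials and schema here are hypothetical.
conn = psycopg2.connect(host="rdbms-server.example.com",
                        dbname="sales", user="app", password="secret")
cur = conn.cursor()

# The SQL is shipped to the server; only the result rows come back.
cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
for region, total in cur.fetchall():
    print(region, total)

conn.close()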

The problem with this client server model was that a single RDBMS server could easily become the bottleneck.

 

Scaling up with SMP or ccNUMA databases

Back in the 1990s, multi-core CPUs and hyper-threading did not exist. Instead, large UNIX servers either used a single RISC CPU with a higher clock speed, or used multiple CPU sockets (generally at lower clock speeds). It turned out that creating large SMP UNIX servers by adding more CPU sockets to a common system bus was complicated, and such systems generally did not scale well.

Symmetric Multiprocessor Systems for scale up

A more scalable approach was to associate CPU sockets with their own local memory. This Non-Uniform Memory Access (NUMA) architecture also has to deal with assumptions about loads and stores at the instruction level: most developers want the same cache behavior that they would see on a single core, single socket machine. Cache coherent NUMA systems (ccNUMA) provide this desired behavior.

ccNUMA architecture for scale up

Large SMP or ccNUMA machines tend to be very expensive and the individual CPUs may not run that fast. These issues meant that RDBMS servers could not continue to scale up, as only so many CPUs can be added to SMP or ccNUMA systems. RDBMS from Sybase, Oracle, Microsoft and IBM were available on SMP and ccNUMA systems from companies like Sun, HP, IBM and SGI.

As RDBMS could not continue to scale up on large SMP or ccNUMA servers, the alternative was to scale out.

 

Scaling out

Instead of scaling up with large SMP or ccNUMA servers, RDBMS vendors looked at using many small or medium-sized servers to scale out. There are two main approaches to scaling out (shared disk or shared nothing):

Shared disk and shared nothing architectures

There are pros and cons to each approach, and most vendors/users have strong feelings about why their approach is better.

 

Scaling out with shared disk databases

Some examples of shared disk RDBMS are IBM Parallel Sysplex (1990), IBM IMS (1991), Oracle Real Application Clusters (2001), Sybase Adaptive Server Enterprise (2008) and IBM DB2 PureScale (2009).

IBM Parallel Sysplex and Oracle Real Application Clusters

The Oracle Exadata Database Machine is an extension of Oracle Real Application Clusters which runs on specialized hardware and adds intelligence to the storage layer to enable more parallel processing and minimize the network traffic.

Oracle Exadata Storage Cells architecture

Unlike most other shared disk databases, RAC and Exadata can use the full power of SQL and can usually run packaged applications (like SAP, PeopleSoft, JD Edwards, Salesforce and eBusiness Suite) unchanged. RAC and Exadata do need to be tuned and run on good hardware to perform well.

 

Scaling out with shared nothing

Many data warehouse vendors have successfully used a shared nothing architecture for analytic workloads. For example, Teradata (1983), IBM DB2 Parallel Edition (1994), Informix Extended Parallel Server (2002), Netezza (2003) and Greenplum (2005).

Teradata architecture

Shared nothing RDBMS can also sometimes handle OLTP workloads. For example, Tandem NonStop SQL (1984) and MySQL NDB Cluster (2003).

MySQL NDB Cluster

Key Value and NoSQL databases

Another approach to database scaling using a shared nothing architecture is to eliminate the SQL RDBMS. Relational databases have tables, and applications usually need to join tables to get the data that they need. Table joins which involve rows on different machines generally require network messaging to get the data in one place, and these network messages can really slow down a shared nothing database. If the logical tables were de-normalized into one big wide table then no joins would be required, but de-normalized tables tend to require much more space than normalized tables. So instead of using tables, Key/Value databases use keys which point to values. Another name for these Key/Value databases is NoSQL databases.
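A toy illustration of the trade-off, using plain Python dicts as stand-ins for tables and for a key/value store (all names and values are hypothetical). The normalized form needs two lookups that, on a shared nothing cluster, may live on different machines; the de-normalized value answers in a single key lookup at the cost of duplicated data:

# Normalized, relational style: two 'tables', a join needed at read time.
customers = {42: {"name": "Acme", "city": "Oslo"}}
orders = {7: {"customer_id": 42, "amount": 99.0}}

order = orders[7]
customer = customers[order["customer_id"]]  # second lookup; on a shared
print(customer["name"], order["amount"])    # nothing cluster this can be a
                                            # network hop to another node

# De-normalized, key/value style: one wide value per key, no join needed,
# but the customer data is duplicated into every order for that customer.
kv_orders = {7: {"customer_name": "Acme", "customer_city": "Oslo",
                 "amount": 99.0}}
print(kv_orders[7]["customer_name"], kv_orders[7]["amount"])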

Some examples of NoSQL databases are Cassandra (2008), Couchbase (2012), Oracle NoSQL (2012) and MongoDB (2013).

MongoDB and Couchbase sharded architectures

A challenge for the NoSQL vendors was that most developers know and like SQL. So these NoSQL databases needed to add some SQL functionality to keep their customers happy and productive. Some examples of these [limited] SQL interfaces are CQL for Cassandra, Oracle Big Data SQL for Oracle NoSQL and N1QL for Couchbase.

 

Sharded RDBMS

Another scale out approach is to 'shard' existing RDBMS. DB sharding usually requires the SQL applications to be written to deal with the fact that there are many independent databases rather than a single database. SQL operations that can be processed on a single shard tend to be fast and scalable. SQL operations that need to span multiple shards are either disallowed or require custom code to cope with the multiple databases. For sharded databases to scale and be effective, the data needs to be evenly distributed across the different shards (databases). Often this data distribution is a manual operation.
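A minimal sketch of application-side shard routing, assuming a simple hash-mod-N placement (real deployments often use consistent hashing or a directory service; the shard names and keys here are hypothetical). A single-key operation touches exactly one shard, while a query spanning keys must fan out to all shards in application code:

import hashlib

SHARDS = ["shard0", "shard1", "shard2", "shard3"]  # independent databases

def shard_for(key: str) -> str:
    # Hash the sharding key so rows spread evenly across the shards.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

# Single-shard operation: routed to exactly one database, fast and scalable.
print("customer:1001 ->", shard_for("customer:1001"))

# Cross-shard operation: the application must query every shard and merge
# the partial results itself -- the custom code that sharding requires.
for shard in SHARDS:
    print("run on", shard, ": SELECT COUNT(*) FROM customers")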

Database sharding

Customers with large development groups have created sharded RDBMS from databases like Oracle, SQL Server, MySQL, PostgreSQL and TimesTen.

 

Shared Nothing In Memory SQL RDBMS

The combination of Cassandra's ability to scale and the challenges of sharded RDBMS prompted SQL RDBMS vendors to revisit shared nothing architectures. VoltDB (2010), memSQL (2013), NuoDB (2013) and Oracle TimesTen Scaleout (2018) are examples of In-Memory, shared nothing SQL RDBMS.

Oracle TimesTen Velocity Scale In-Memory Database

The idea of these shared nothing, In-Memory, SQL RDBMS is to benefit from the horizontal scaling of a shared nothing architecture while keeping the benefits of the SQL language. As SQL is a declarative language, the data structures which enable the data access can continue to be optimized without needing to change the application code. Having the appropriate data distribution mechanisms for the access patterns is key to scalability in a shared nothing architecture.
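A small sketch of why the distribution choice matters, assuming hash distribution over three nodes (the node count, keys and row counts are made up). When orders are distributed on the join key (customer_id), every order/customer join pair is co-located on one node; distributing orders on order_id instead forces roughly two thirds of the joins across the network:

import random

NODES = 3

def node_of(key: int) -> int:
    # The node that stores the row with this distribution key value
    # (a toy hash; real systems use a proper hash function).
    return key % NODES

# 1000 hypothetical orders, each referencing a random customer.
orders = [(oid, random.randrange(10_000)) for oid in range(1000)]

def cross_node_joins(dist_key):
    # Joins where the order row and its customer row sit on different nodes.
    return sum(1 for oid, cid in orders
               if node_of(dist_key(oid, cid)) != node_of(cid))

print("orders distributed on customer_id:",
      cross_node_joins(lambda oid, cid: cid), "cross-node joins")   # 0
print("orders distributed on order_id:   ",
      cross_node_joins(lambda oid, cid: oid), "cross-node joins")   # ~667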

Elastic scalability, with automatic distribution of data and allowing developers to write SQL code that appears to be for a single DB, was the next stage in the evolution from sharded RDBMS. If these In-Memory SQL RDBMS can provide the same latency and scalability as the NoSQL and sharded RDBMS, does this mean that the NoSQL and sharded RDBMS will die off?

 

Database Architecture Taxonomy

Based on the information covered in this article, the following chart attempts to provide a taxonomy of DB architectures as they have evolved over time. 

DB Scalability Taxonomy

It will be interesting to see which database architectures are most effective for database scaling over the next 10 years as this field is rapidly evolving. Modern multi-core CPUs do not change the scale up vs scale out equation, as scaling out will tend to provide more scalability than scaling up.

 

Disclaimer: these are my personal thoughts and do not represent Oracle's official viewpoint in any way, shape, or form.
