Tuesday Apr 21, 2009

MySQL 5.4 Scaling on Nehalem with Sysbench

As a final followup to my MySQL 5.4 Scalability on 64-way CMT Servers blog, I'm posting MySQL 5.4 Sysbench results on a Sun Fire X4270 platform using the Intel x86 Nehalem chip (2 sockets/8 cores/16 threads). All CPUs were turned on during the runs. The my.cnf was the same as described in the previous blog.

The Sysbench version used was 0.4.12, and the read-only runs were invoked with the following command:

sysbench --max-time=300 --max-requests=0 --test=oltp --oltp-dist-type=special --oltp-table-size=10000000 \
   --oltp-read-only=on --num-threads=[NO_THREADS] run

The "oltp-read-only=on" parameter was omitted for the read-write tests.
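The benchmark procedure can be sketched as a shell loop. The thread counts below are illustrative rather than the exact set used in these tests, and the command is echoed as a dry run (remove the echo to execute, after the table has been built with a "sysbench ... prepare" pass):

```shell
# Dry-run sketch of the Sysbench benchmark loop.
MODE=read-only            # set to read-write to drop the read-only flag
for T in 1 4 8 16 32 64; do
  CMD="sysbench --max-time=300 --max-requests=0 --test=oltp \
--oltp-dist-type=special --oltp-table-size=10000000 --num-threads=$T"
  if [ "$MODE" = "read-only" ]; then CMD="$CMD --oltp-read-only=on"; fi
  echo "$CMD run"         # dry run; remove the echo to actually run it
done
```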

Here are the results graphically running on Linux.

The read-only results for MySQL 5.4 show a higher peak throughput, but a tail off beyond 32 threads compared to MySQL 5.1. The drop off in throughput is clearly an issue that needs to be tracked down and resolved. By contrast, the read-write results show better throughput across the board for MySQL 5.4 compared to MySQL 5.1 and significantly better scalability.

Here are the MySQL 5.4 results graphically on Solaris (Sysbench 0.4.8). Note that in this case unused CPUs were turned off. The number of vCPUs is the same as the number of active threads up to 16. Active threads beyond 16 are using only 16 vCPUs.

Solaris shows better throughput than Linux for the read-only tests and good scalability across the range of active threads.

The Sun Fire X4270 platform with its Nehalem CPUs offers a clear demonstration of the improved scalability of MySQL 5.4. The results suggest that users can expect to be able to effectively use available CPU resources when running MySQL 5.4 instances on this platform.


MySQL 5.4 Sysbench Scalability on 64-way CMT Servers

As a followup to my MySQL 5.4 Scalability on 64-way CMT Servers blog, I'm posting MySQL 5.4 Sysbench results on the same platform. The tests were carried out using the same basic approach (i.e. turning off entire cores at a time) - see my previous blog for more details.

The Sysbench version used was 0.4.8, and the read-only runs were invoked with the following command:

sysbench --max-time=300 --max-requests=0 --test=oltp --oltp-dist-type=special --oltp-table-size=10000000 \
   --oltp-read-only=on --num-threads=[NO_THREADS] run

The "oltp-read-only=on" parameter was omitted for the read-write tests. The my.cnf file listed in my previous blog was also used unchanged for these tests.

Here is the data presented graphically. Note that the number of vCPUs is the same as the number of active threads up to 64. Active threads beyond 64 are using only 64 vCPUs.

And here is some raw data:


[Table: User Threads, Txns/sec, User %, System %, Idle % - read-only test]

[Table: User Threads, Txns/sec, User %, System %, Idle % - read-write test]

A few observations:

  • Throughput scales 63% from 32-way to 64-way for read-only and 67% for read-write. Not quite as good as for the OLTP test reported in my earlier blog, but not at all bad.
  • Beyond 48 vCPUs, idle CPU is preventing optimal scaling for the read-only test.
  • There's quite a bit of CPU left on the table for the read-write tests.

There's still a lot more work to be done, but we're definitely seeing progress.


MySQL 5.4 Scalability on 64-way CMT Servers

Today Sun Microsystems announced MySQL 5.4, a release that focuses on performance and scalability. For a long time it's been possible to escape the confines of a single system with MySQL, thanks to scale-out technologies like replication and sharding. But it ought to be possible to scale-up efficiently as well - to fully utilize the CPU resource on a server with a single instance.

MySQL 5.4 takes a stride in that direction. It features a number of performance and scalability fixes, including the justifiably-famous Google SMP patch along with a range of other fixes. And there's plenty more to come in future releases. For specifics about the MySQL 5.4 fixes, check out Mikael Ronstrom's blog.

So how well does MySQL 5.4 scale? To help answer the question I'm going to take a look at some performance data from one of Sun's CMT systems based on the UltraSPARC T2 chip. This chip is an interesting test case for scalability because it amounts to a large SMP server in a single piece of silicon. The UltraSPARC T2 chip features 8 cores, each with 2 integer pipelines, one floating point unit, and 8 hardware threads. So it presents a total of 64 CPUs (hardware threads) to the Operating System. Since there are only 8 cores and 16 integer pipelines, you can't actually carry out 64 concurrent operations. But the CMT architecture is designed to hide memory latency - while some threads on a core are blocked waiting for memory, another thread on the same core can be executing - so you expect the chip to achieve total throughput that's better than 8x or even 16x the throughput of a single thread.

The data in question is based on an OLTP workload derived from an industry standard benchmark. The driver ran on the system under test as well as the database, and I used enough memory to adequately cache the active data and enough disks to ensure there were no I/O bottlenecks. I applied the scaling method we've used for many years in studies of this kind. Unused CPUs were turned off at each data point (this is easily achieved on Solaris with the psradm command). That ensured that "idle" CPUs did not actually participate in the benchmark (e.g. the OS assigns device drivers across the full set of CPUs, so they routinely handle interrupts if you leave them enabled). I turned off entire cores at a time, so an 8 "CPU" result, for example, is based on 8 hardware threads in a single core with the other seven cores turned off, and the 32-way result is based on 4 cores each with 8 hardware threads. This approach allowed me to simulate a chip with fewer than 8 cores. Sun does in fact ship 4- and 6-core UltraSPARC T2 systems; the 32- and 48-way results reported here should correspond with those configurations. For completeness, I've also included results with only one hardware thread turned on.
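The core-offlining step described above can be sketched as a dry run (assuming vCPU IDs 0-63 with 8 hardware threads per core, and a seq utility on the path; the command is only echoed here, since psradm must be run as root on Solaris):

```shell
# Dry-run sketch: offline every hardware thread outside the first
# ACTIVE_CORES cores, leaving an ACTIVE_CORES x 8-way configuration.
ACTIVE_CORES=4                          # e.g. simulate a 32-way config
FIRST_OFF=$((ACTIVE_CORES * 8))         # first vCPU ID to turn off
OFFLINE_IDS=$(seq "$FIRST_OFF" 63 | tr '\n' ' ')
echo "psradm -f $OFFLINE_IDS"           # psradm -n <ids> turns them back on
```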

Since the driver kit for this workload is not open source, I'm conscious that community members will be unable to reproduce these results. With that in mind, I'll separately blog Sysbench results on the same platform. So why not just show the Sysbench results and be done with it? The SQL used in the Sysbench oltp test is fairly simple (e.g. it's all single table queries with no joins); this test rounds out the picture by demonstrating scalability with a more complex and more varied SQL mix.

It's worth noting that any hardware threads that are turned on will get access to the entire L2 cache of the T2. So, as the number of active hardware threads increases, you might expect that contention for the L2 would impact scalability. In practice we haven't found it to be a problem.

Here is the my.cnf:

table_open_cache = 4096
innodb_log_buffer_size = 128M
innodb_log_file_size = 1024M
innodb_log_files_in_group = 3
innodb_buffer_pool_size = 6G
innodb_additional_mem_pool_size = 20M
innodb_thread_concurrency = 0

Enough background - on to the data. Here are the results presented graphically.

And here is some raw data:

MySQL 5.4.0

[Table: vCPUs, Txns/sec, Us, Sy, Id, smtx, icsw, sycl]

Abbreviations: "vCPUs" refers to hardware threads; "Us", "Sy", and "Id" are CPU utilization for User, System, and Idle respectively; "smtx" refers to mutex spins, "icsw" to involuntary context switches, and "sycl" to system calls - the data in each of these last three columns is normalized per transaction.
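The per-transaction normalization of those last three columns can be sketched as follows. The counter totals and transaction count below are made-up illustrative numbers, not measurements; on Solaris the raw per-CPU smtx, icsw, and syscall counts come from mpstat:

```shell
# Illustrative only: divide interval-summed mpstat counters by the
# number of transactions completed in the same interval.
TXNS=10000                      # transactions completed in the interval
SUMS="120000 9000 640000"       # summed smtx, icsw, sycl for the interval
OUT=$(echo "$SUMS" | awk -v t="$TXNS" \
  '{ printf "smtx/txn=%.1f icsw/txn=%.1f sycl/txn=%.1f", $1/t, $2/t, $3/t }')
echo "$OUT"
```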

Some interesting takeaways:

  • The throughput increases 71% from 32 to 64 vCPUs. It's encouraging to see a significant increase in throughput beyond 32 vCPUs.
  • The scaleup from 1 to 64 vCPUs is almost 30x. As I noted earlier, the UltraSPARC T2 chip does not have 64 cores - it only has 8 cores and 16 integer pipelines, so this is a good outcome.

To put these results into perspective, there are still many high volume high concurrency environments that will still require replication and sharding to scale acceptably. And while MySQL 5.4 scales better than previous releases, we're not quite seeing scalability equivalent to that of proprietary databases. It goes without saying, of course, that we'll be working hard to further improve scalability and performance in the weeks and months to come.

But the clear message today is that MySQL 5.4 single-server scalability is now good enough for many commercial transaction-based application environments. If your organization hasn't yet taken the plunge with MySQL, now is definitely the time. And you certainly won't find yourself complaining about the price-performance.


Wednesday Nov 05, 2008

MySQL Performance Optimizations

You might be wondering what's been happening with MySQL performance since Sun arrived on the scene. The good news is that we haven't been idle. There's been general recognition that MySQL could benefit from some performance and scalability enhancements, and Sun assembled a cross-organizational team immediately after the acquisition to get started on it. We've enjoyed excellent cooperation between the engineers from both organizations.

This kind of effort is not new for Sun - we've been working with proprietary database companies on performance for years, with source code for each of the major databases on site to help the process. In this case, the fact that the MySQL engineers are working for the same company certainly simplifies a lot of things.

If you'd like to get some insight into what's been happening, a video has just been posted online that features brief interviews with some of the engineers involved in various aspects of the exercise.

There are also screencasts available that offer a brief Introduction to MySQL Deployments, some basic pointers on Tuning and Monitoring MySQL, and a bit of a look at Future Directions for MySQL Performance. Other videos that might be of interest are also available at http://www.sun.com/software/.

The obvious question is when customers are going to see performance improvements in shipping releases. Actually, it's already happened. One example is a bug fix that made it into MySQL 5.1.28 and 6.0.7. The fix makes a noticeable difference to peak throughput and scalability with InnoDB. And that's only the beginning. You can expect to see a lot more in the not-too-distant future. Watch this space!

Wednesday Apr 16, 2008

Dtrace with MySQL 6.0.5 - on a Mac

For the first time, MySQL includes Dtrace probes in the 6.0 release. On platforms that support Dtrace you can still find out a lot about what's happening, both in the Operating System kernel and in user processes, even without probes in the application. But carefully placed Dtrace probes inserted into the application code can give you a lot more information about what's going on, because they can be mapped to the application functionality. So far only a few probes have been included, but expect more to be added soon.

I decided to take the new probes for a spin. Oh, and rather than do it on a Solaris system, I figured I'd give it a shot on my Intel Core 2 Duo MacBook Pro, since MacOS X 10.5 (Leopard) supports Dtrace.

To begin with I pulled down and built MySQL 6.0.5 from BitKeeper, thanks to some help from Brian Aker. Here's where, and here's how. I needed to make a couple of minor changes to the dtrace commands in the Makefiles to get it to compile - hopefully that will be fixed pretty soon in the source. Before long I had mysqld running. A quick "dtrace -l | grep mysql" confirmed that I did indeed have the Dtrace probes.

There are lots of interesting possibilities, but for now I'll just run a couple of simple scripts to illustrate the basics.

First, I ran this D script:

#!/usr/sbin/dtrace -s

/* probe clause restored: match every MySQL static probe */
mysql*:::
{ printf("%d\n", timestamp); }

Running "select count(*) from latest" from mysql in another window yielded the following:

dtrace: script './all.d' matched 16 probes
CPU     ID                    FUNCTION:NAME
  0  18665 _ZN7handler16ha_external_lockEP3THDi:external_lock 26152931955098

  0  18673 _Z13handle_selectP3THDP6st_lexP13select_resultm:select_start 26152931997414

  0  18665 _ZN7handler16ha_external_lockEP3THDi:external_lock 26153878060162

  0  18672 _Z13handle_selectP3THDP6st_lexP13select_resultm:select_finish 26153878082583

So the count(*) took out a couple of locks, and we got to see the start and end of the select with a timestamp (in nanoseconds). Just for interest, I ran the count(*) a second time, and this time none of the probes fired - the query was being satisfied from the query cache.

Next I decided to try "show indexes from latest;". The result was as follows:

mysql> show indexes from latest;
| Table  | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_Comment |
| latest |          0 | latest_x |            1 | hostid      | A         |      356309 |     NULL | NULL   | YES  | BTREE      |         |               | 
| latest |          0 | latest_x |            2 | exdate      | A         |      356309 |     NULL | NULL   | YES  | BTREE      |         |               | 
| latest |          0 | latest_x |            3 | extime      | A         |      356309 |     NULL | NULL   | YES  | BTREE      |         |               | 
3 rows in set (0.15 sec)

Here's the result from Dtrace:

  0  18673 _Z13handle_selectP3THDP6st_lexP13select_resultm:select_start 27114155991557

  0  18670 _ZN7handler12ha_write_rowEPh:insert_row_start 27114308499688

  0  18669 _ZN7handler12ha_write_rowEPh:insert_row_finish 27114308553968

  0  18670 _ZN7handler12ha_write_rowEPh:insert_row_start 27114308565086

  0  18669 _ZN7handler12ha_write_rowEPh:insert_row_finish 27114308588605

  0  18670 _ZN7handler12ha_write_rowEPh:insert_row_start 27114308598685

  0  18669 _ZN7handler12ha_write_rowEPh:insert_row_finish 27114308622164

  0  18672 _Z13handle_selectP3THDP6st_lexP13select_resultm:select_finish 27114308714705

So three rows were inserted, presumably into a temporary table, corresponding to the three index columns. Dtrace shows that the query cache isn't used when you rerun this particular query.

Next I ran the following D script:

#!/usr/sbin/dtrace -s

/* probe clauses restored from the output below */
mysql*:::insert_row_start, mysql*:::select_start
{ self->ts = timestamp; }

mysql*:::insert_row_finish, mysql*:::select_finish
{ printf("%d\n", timestamp - self->ts); }

and reran the "show indexes" command. Here's the result:

CPU     ID                    FUNCTION:NAME
  1  18669 _ZN7handler12ha_write_rowEPh:insert_row_finish 61634

  1  18669 _ZN7handler12ha_write_rowEPh:insert_row_finish 22040

  1  18669 _ZN7handler12ha_write_rowEPh:insert_row_finish 21058

  1  18672 _Z13handle_selectP3THDP6st_lexP13select_resultm:select_finish 88933

This time we see just one line for each insert and select, including the time taken to complete the operation (in nanoseconds) rather than a timestamp.

Even in the more compact form, this would get a bit verbose for some operations, though, so I ran the following D script:

#!/usr/sbin/dtrace -s

/* probe clauses restored from the output below */
mysql*:::insert_row_start, mysql*:::select_start
{ self->ts = timestamp; }

mysql*:::insert_row_finish, mysql*:::select_finish
{ @completion_time[probename] = quantize(timestamp - self->ts); }

After repeating the "show indexes", Dtrace returned the following after a Control-C:

dtrace: script './all3.d' matched 12 probes

           value  ------------- Distribution ------------- count    
            8192 |                                         0        
           16384 |@@@@@@@@@@@@@@@@@@@@@@@@@@@              2        
           32768 |@@@@@@@@@@@@@                            1        
           65536 |                                         0        

           value  ------------- Distribution ------------- count    
           32768 |                                         0        
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1        
          131072 |                                         0        

This actually gives more info, but in a fairly compact form. Thanks to the quantize() function, we see a histogram with a count for each operation alongside the time taken: two of the inserts completed in roughly 16384 nanoseconds (about 16 microseconds), and one in roughly 32768 nanoseconds.

Finally, I ran a "show tables;", which returned 48 rows, and the following from Dtrace:

dtrace: script './all3.d' matched 12 probes

           value  ------------- Distribution ------------- count    
           32768 |                                         0        
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1        
          131072 |                                         0        

           value  ------------- Distribution ------------- count    
            4096 |                                         0        
            8192 |@@@@@@@@@@@@                             14       
           16384 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             33       
           32768 |@                                        1        
           65536 |                                         0        

The report shows the 48 inserts. It would have been a lot more difficult to read with the first couple of D scripts, but the quantize() function offers a nice summary for no effort.

This is barely scratching the surface of what Dtrace can do, of course. But hopefully I've whetted your appetite. Why don't you check it out for yourself?

Monday Feb 25, 2008

Tuning MySQL on Linux

In this blog I'm sharing the results of a series of tests designed to explore the impact of various MySQL and, in particular, InnoDB tunables. Performance engineers from Sun have previously blogged on this subject - the main difference in this case is that these latest tests were based on Linux rather than Solaris.

It's worth noting that MySQL throughput doesn't scale linearly as you add large numbers of CPUs. This hasn't been a big issue to most users, since there are ways of deploying MySQL successfully on systems with only modest CPU counts. Technologies that are readily available and widely deployed include replication, which allows horizontal scale-out using query slaves, and memcached, which is very effective at reducing the load on a MySQL server. That said, scalability is likely to become more important as people increasingly deploy systems with quad-core processors, with the result that even two processor systems will need to scale eight ways to fully utilize the available CPU resources.

The obvious question is whether performance and scalability is going to attract the attention of a joint project involving the performance engineering groups at MySQL and Sun. You bet! Fruitful synergies should be possible as the two companies join forces. And in case you're wondering, Linux will be a major focus, not just Solaris - regard this blog as a small foretaste. Stay tuned in the months to come...

Test Details

On to the numbers. The tests were run on a Sun Fire X4150 server with two quad-core Intel Xeon processors (8 cores in total) and a Sun Fire X4450 server with four quad-core Intel Xeon processors (16 cores in total) running Red Hat Enterprise Linux 5.1. The workload was Sysbench with 10 million rows, representing a database about 2.5 Gbytes in size, using the current 64-bit Community version of MySQL, 5.0.51a. My colleague Neel has blogged on the workload and how we used it. The graphs below do not list throughput values, since the goal was only to show relative performance improvements.

The first test varied innodb_thread_concurrency. In MySQL 5.0.7 and earlier, a value greater than 500 was required to allocate an unlimited number of threads. As of MySQL 5.0.18, a value of zero means unlimited threads. In the graph below, a value of zero clearly delivers better throughput beyond 4 threads for the read-only test.

The read-write test, however, benefits from a setting of 8 threads. These graphs show the throughput on the 8-core system, although both the 8- and the 16-core systems showed similar behavior for each of the read-only and the read-write tests.
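Expressed as my.cnf fragments, here is a sketch of the two settings discussed above (values taken from these tests; the best choice depends on your workload):

```ini
# read-only: no InnoDB concurrency cap scaled best beyond 4 threads
innodb_thread_concurrency = 0

# read-write: a cap of 8 gave the best throughput in these tests
# innodb_thread_concurrency = 8
```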

The following graphs show the effect of increasing the InnoDB buffer cache with the innodb_buffer_pool_size parameter. The first graph shows read-only performance and the second shows read-write performance. As you would expect, throughput increases significantly as the cache increases in size, but eventually reaches a point where no benefit is derived from further increases.

Finally, we've seen that throughput is affected by the amount of memory we assign to the InnoDB buffer cache. But since the default Linux file system, ext3, also caches pages, why not let Linux do the caching rather than InnoDB? To test this, we compared throughput with and without Linux file system caching. Setting the innodb_flush_method parameter to O_DIRECT will cause MySQL to bypass the file system cache. The results are shown in the graph below.

Clearly the file system cache makes a difference: throughput with the InnoDB buffer cache set to 1024 Mbytes backed by the file system cache is as good as throughput with no file system caching and the InnoDB buffer cache set to 2048 Mbytes. But while the Linux file system cache can help protect you somewhat if you undersize your InnoDB buffer cache, for optimal performance it's important to give the InnoDB buffer cache as much memory as it needs. Bypassing the Linux file system cache may not be a good idea unless you have properly sized the InnoDB buffer cache - disk read activity was very high when the buffer cache was too small and the file system cache was being bypassed. We also found that the CPU cost per transaction was higher when the InnoDB buffer cache was too small. That's not surprising, since the code path is longer when MySQL has to go outside the buffer cache to retrieve a block.
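A minimal my.cnf sketch for the O_DIRECT configuration (the buffer pool figure is illustrative only, drawn from the sizes tested above):

```ini
innodb_flush_method = O_DIRECT     # bypass the ext3 page cache
innodb_buffer_pool_size = 2G       # only safe if sized for the working set
```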

We tested a number of other parameters but found that none were as significant for this workload.

So to summarize, two key parameters to focus on are innodb_buffer_pool_size and innodb_thread_concurrency. Appropriate settings for these parameters are likely to help you ensure optimal throughput from your MySQL server.


Thursday Jan 17, 2008

MySQL in Safe Hands

Given the timing of my recent blog, Are Proprietary Databases Doomed?, I've been asked if I knew in advance about Sun's recent MySQL acquisition. Not at all! I was just as surprised and delighted as most others in the industry when I saw the news.

In the blog I outlined counter strategies that proprietary database companies might use to respond to the rise of Open Source Databases (OSDBs). One strategy was acquisition, and I noted that MySQL, being privately held, was probably the most vulnerable.

The good news is that MySQL is no longer vulnerable. Sun has an unparalleled commitment to open source. No other organization has contributed anything like the quantity and quality of code, with Solaris, Java, OpenOffice, GlassFish, and other software now freely available under open source licenses. Sun also has an established track record with OSDBs such as PostgreSQL, and JavaDB, Sun's distribution of Derby. The MySQL acquisition does not represent a change of direction for Sun, rather the extension of an existing strategy.

The real surprise is that one of Oracle, IBM, or Microsoft didn't get there first. Any of them could have swallowed MySQL without burping. I'm betting there are people today wondering why on earth they let Sun steal a march on them.

Whatever the reason, the final outcome is great news for anyone who appreciates the value of free open source software. MySQL couldn't be in safer hands.


Monday Dec 10, 2007

Are Proprietary Databases Doomed?

Times of change are upon the database market. The major established database companies are being challenged by open source upstarts like MySQL and PostgreSQL. For years, Open Source Databases (OSDBs) have been quietly increasing their penetration, but until recently they have lacked the capabilities to seriously threaten proprietary databases like Oracle, IBM's DB2, and Microsoft's SQL Server.

All that has changed. OSDBs now boast the necessary features and robustness to support commercial databases hundreds of Gigabytes in size. And a growing trickle of competitive benchmark results shows them performing more than acceptably well against their better-established cousins, while offering significant benefits in Total Cost of Ownership (TCO).

What does this mean for proprietary databases? Are they doomed? And more importantly, are there opportunities for end users to benefit from the rise of OSDBs? I will explore these topics in a multi-part blog:

  1. Feature Stagnation In The Traditional Database Market
  2. License Costs: the Soft Underbelly of Proprietary Databases
  3. The Looming Open Source Database Tsunami
  4. The Perfect Storm for Proprietary Databases
  5. Proprietary Counter Strategies
  6. Conclusion

The standard disclaimer applies as always: these are my opinions and not necessarily those of Sun or anyone else.

1. Feature Stagnation In The Traditional Database Market

When I joined Sun in the late 80s, choosing a database was still an important issue for end users. Large customers routinely issued tenders for databases as well as for computer systems, and, to help in the selection process, customers often staged performance bakeoffs between competing database vendors using home-grown benchmarks.

In the 90s, fierce competition led to a rapid explosion in features as well as dramatic improvements in performance. Sun in particular invested a lot of engineering effort in working with the major database companies to improve performance and scalability. At the same time, a variety of new technologies appeared, many claiming they would knock the relational database from its throne. Distributed databases, object relational, shared nothing, and in-memory database implementations all made cameo appearances. Relational databases simply absorbed their best features and continued to rule. Simple database features like triggers and stored procedures gave way to more sophisticated technologies like replication, online backup, and cluster support.

By the turn of the millennium, relational databases had already pretty much met the essential requirements of end users, and proprietary database companies were either pointing their vacuum cleaners toward other interesting money piles, or losing the plot entirely and sailing off the edge of the world. Today, database releases continue to tout new features, but they're frosting on the cake rather than essentials. No-one issues a tender for a database unless they have unusual requirements. No-one loses their job because they chose the wrong database. And it's been that way for years.

Put very simply, the database has arguably become a commodity.

2. License Costs: the Soft Underbelly of Proprietary Databases

Databases may have become commodities, but selling them is still very profitable. Historically, as CPU performance increased with faster clock speeds, users continued to pay the same price for database licenses on the newer, more powerful systems. But that all changed as the industry moved to multi-core CPUs. As we will see, the licensing policies adopted by proprietary database companies have ensured that license charges have increased steeply as a result of this revolution in processor chip technology.

Some years ago, chip manufacturers began turning to multi-core CPU designs as a way of continuing to drive improvements in CPU performance. As it becomes more difficult to increase the transistor density of CPU chips and increase clock speeds, multi-core chips offer a simple alternative by packing more than one core on a single chip running at a lower clock speed. At the same time, proprietary database vendors began basing license charges on the number of cores in a system.

A practical example is Sun's dual-core UltraSPARC-IV chip. It replaced the single-core UltraSPARC-III chip at the same clock speed. By delivering two cores instead of one, the UltraSPARC-IV offered twice the performance of its predecessor. A typical system was the popular UltraSPARC-IV-based Sun Fire V490 which included four dual-core chips (eight cores). This system replaced the Sun Fire V480 with four single-core UltraSPARC-III chips. The customer received twice the CPU performance for the same hardware price.

Not so for the Oracle database price, though. Based on per-core licensing, the new system was now treated as an 8-core system instead of a 4-core system as previously. And worse, users were forced to a significantly more expensive database edition if they deployed systems with more than four cores. So, compared to the V480, the V490 now attracted a much higher per-core charge, on top of the requirement to pay for twice as many core licenses.

The following table illustrates the extraordinary windfall received by Oracle:

                                     V480    V490
Relative Performance                  1.0     2.0
Relative Hardware Price               1.0     1.0
Database Core Licenses Required       4       8
Relative Core License Price           1.0     2.7
Relative Database License Price       1.0     5.3
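The bottom row of the table is roughly the product of the two rows above it. A quick sketch of the arithmetic (the table's 5.3 presumably reflects rounding of the underlying list prices):

```shell
# Relative license price ~= (relative core licenses) x (relative per-core price)
LICENSES_RATIO=2.0       # 8 core licenses vs 4
PER_CORE_RATIO=2.7       # pricier edition required above four cores
OUT=$(awk -v a="$LICENSES_RATIO" -v b="$PER_CORE_RATIO" \
  'BEGIN { printf "%.1f", a * b }')
echo "$OUT"              # close to the 5.3 shown in the table
```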

In the face of considerable pushback from the industry, Oracle responded with "discounts" for the second core in a chip. In the case of the V490, that meant the discounted database price was still four times the license charge for the V480!

So while users continued to enjoy more powerful and more feature-rich hardware at the same or lower price, they were paying a lot more to use the same database software on the new hardware.

The situation prompted comments like those from Stephen Elliot, an enterprise systems analyst at IDC, who anticipated increased pressure on Oracle to be more flexible with pricing, and reported that "Oracle is becoming the lone force on the processor issue". Since that time, under pressure from Microsoft, which introduced per-chip database licensing back in Feb 2005, Oracle has continued to tweak its licensing model, but so far without making wholesale changes.

It should be noted that Oracle is not alone in following this path - similar anomalies apply to the pricing of IBM's DB2 database. Nor is Microsoft the only vendor to embrace a per-chip pricing strategy - as long ago as 2005, the Register reported that BEA was adopting the same per-socket licensing model as Microsoft and VMware, and noted that "the software maker's move puts it in prime fighting position against Oracle and IBM, which have been slow to adjust their pricing models for new chips from AMD, Intel and others." According to Ashlee Vance in July 2007, "Most software vendors have had the decency to settle on a per-socket basis for their pricing schemes, ignoring the number of cores per chip. Meanwhile, IBM and Oracle, the vendors with the most to lose, prefer to keep you in a state of pricing confusion."

Anecdotally, some companies are finding that database licenses have become their single biggest IT cost. The impact is probably greater on small and medium-sized companies that don't have the same ability to command the hefty discounts that larger companies typically enjoy from database vendors. A colleague related a story that illustrates the issue. His brother worked for a 200-person company that decided it was time to upgrade their database applications. They set out to deploy a well-known proprietary database until they discovered that the database license fee was going to exceed their entire current annual IT budget! They ended up deploying an open source database instead.

3. The Looming Open Source Database Tsunami

In August 2007, Tom Daly revealed the results of a SPECjAppServer2004 benchmark based on an entirely open source software stack, with PostgreSQL running on a Sun Fire T2000 server and the Glassfish Application Server on two Sun Fire X4200 servers. The announcement was revealing for two reasons:
  • It showed PostgreSQL capable of supporting more than 6,000 concurrent users on a commodity hardware platform
  • The benchmark result was within 10% of a published result from an HP/Oracle configuration costing more than three times as much. The major reason for the huge price difference was the cost of Oracle (at $110,000 compared to PostgreSQL $0).

    For the record, it should be noted that the SPECjAppServer2004 benchmark does not include a pricing metric, so these are not official prices. Nonetheless, since benchmark configurations clearly cost actual money, it seems reasonable to assess the prices involved. In this case, all hardware and software prices were drawn from publicly-available sources.

Open source databases still do not scale as well as proprietary databases, but they now perform well enough to manage a broad range of challenging applications. End users who previously saw OSDBs as primarily suitable for simple low-volume applications are now able to reasonably consider them for departmental database server deployments.

Do OSDBs have the features needed for serious deployment, though? Twelve months ago, Forrester Research released a report suggesting that 80 percent of applications typically use only 30 percent of the features found in commercial databases, and that open source databases already deliver those features today. While Forrester noted that OSDBs still lag for mission critical applications, those holes are likely to be plugged as bigger players announce 24x7 technical support and service (as has already happened, for example, with PostgreSQL).

A recent survey of Oracle users showed 20% having open source databases larger than 50 Gbytes and two thirds citing cost as the driver to adoption of open source. Open source database adoption is still relatively small. Does that mean OSDBs should be dismissed? Not according to a Gartner analyst, quoted last year as saying "We think it is a big deal. Granted, in the DBS market right now, they are very small players. Remember about 10 years ago, Linux in the market was a very small player? Not so much, anymore."

The comparison may be apt. With the combination of essential features, improved performance, robust support, and compelling price, OSDBs today bear a striking resemblance to Linux a few years ago. Many believe that the wave looming on the horizon is a tsunami. Time alone will tell.

4. The Perfect Storm for Proprietary Databases

Underneath major end-user applications like ERP and data warehouse software, every major hardware and software component is now subject to commodity pricing. As we have seen, each year hardware prices continue to decline while processing power increases. At the same time, leading operating systems like Sun's Solaris and Linux are now open source and can be deployed for free. The same is true of most other components of the software stack, including virtualization software, databases, application servers, web servers, and other middleware. Even Sun's UltraSPARC T1 and T2 chips and RTL have been open sourced, allowing community members to build on proven hardware at a much lower cost.

Why, then, is proprietary database software becoming more expensive while everything else reduces in price? End users normally expect to benefit from the cost savings resulting from improvements in technology. I am writing this blog, for example, on an affordable computer that would easily outperform expensive commercial systems from just 10 years ago.

It seems difficult to resist the conclusion that proprietary database companies have managed to redirect a good chunk of these savings away from end users and into their own coffers. Successful as this strategy has been, though, it could ultimately backfire. The more expensive proprietary databases become, the more attractive lower cost alternatives appear.

A number of forces are currently at work in the market:

  • The momentum around open source software has continued to build, and many open source products, while not as capable as proprietary alternatives, have become "good enough" to replace them. This is also true of open source databases.
  • Large technology suppliers are beginning to bundle OSDBs, with the result that customers are able to take out support contracts with established companies as well as startups. Sun, for example, ships and supports PostgreSQL.
  • Benchmarks are beginning to feature OSDBs. Thus OSDBs are starting down a path that has been trod by proprietary databases over many years. Benchmarks can be expected to highlight the capabilities of OSDBs, accelerate the process of OSDB performance improvement, and, increasingly, expose the price difference between OSDBs and proprietary databases. And as the scalability of OSDBs increases, benchmarks will be published on larger systems, opening up an even wider gap in database pricing.
  • The sweet spot in the hardware market is a two- to four-chip server. With the advent of quad-core chips from Intel and AMD, and the 8-core UltraSPARC T1 and T2 chips released by Sun, such systems have become powerful enough to carry out processing that required much larger and more expensive systems in the past. The pricing chasm between low cost hardware and high cost proprietary databases continues to widen.

    For example, the UltraSPARC T1-based servers, the Sun Fire T1000 and T2000, shipped with 8 cores on a single chip. The recently-released single-chip, 8-core UltraSPARC T2 servers, which deliver twice the performance at roughly the same cost, now attract a DB2 license charge that has increased 66% compared to the T1 platforms. So IBM has taken the opportunity to significantly hike the price of the database software on that platform, even though the number of chips and number of cores has not changed.

    At the same time, 8-core UltraSPARC T1 servers attracted a 2-core license charge from Oracle (a very reasonable pricing decision on Oracle's part - see Oracle's published core factor table). At the time of writing, Oracle has not announced a final decision on pricing for the T2 platform, but it is unlikely that Oracle will be able to resist the temptation to emulate or even outdo the cynicism of IBM. Why such a pessimistic expectation? Because the same core factor table specifies a 0.5 multiplier for the 1.4GHz UltraSPARC version of the older T1 servers, instead of the 0.25 multiplier that applies to the 1.0 GHz and 1.2 GHz versions of the same platform. So a simple 17% clock speed increase in the hardware, with no other changes at all, prompted Oracle to double the license charge!

    To be fair to Oracle, since the UltraSPARC T1 and T2 platforms are single chip systems, customers can purchase Standard Edition and Standard Edition One licenses for them at greatly reduced prices. But these editions do not offer the full Oracle feature set, and in particular they do not offer the parallel processing capabilities essential for efficient processing on the 32-way T1 and 64-way T2 platforms; if customers want parallel capabilities they must purchase the vastly more expensive Enterprise Edition with its core-based licensing and inexplicable and inconsistent "discounts".

  • Web 2.0 is gathering momentum, and the new breed of companies leading the charge are largely ignoring proprietary databases for their deployments. Probably much of the scepticism relates to the cost implications.
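
    The core-factor arithmetic described above is simple to sketch. The following illustration uses a hypothetical per-processor list price (actual Oracle prices vary by edition and contract); the multipliers are the 0.25 and 0.5 core factors cited for the T1 platforms:

    ```python
    # Sketch of per-core license arithmetic (hypothetical list price).
    # Core-factor licensing: licenses = ceil(cores * core_factor),
    # total cost = licenses * per-processor list price.
    import math

    def license_cost(cores, core_factor, per_processor_price):
        """Return (licenses_required, total_cost) under a core-factor scheme."""
        licenses = math.ceil(cores * core_factor)
        return licenses, licenses * per_processor_price

    PRICE = 40_000  # hypothetical per-processor list price

    # 8-core UltraSPARC T1 at 1.0/1.2 GHz: 0.25 multiplier -> 2 licenses
    print(license_cost(8, 0.25, PRICE))  # (2, 80000)

    # Same 8-core chip at 1.4 GHz: 0.5 multiplier -> 4 licenses
    print(license_cost(8, 0.5, PRICE))   # (4, 160000)
    ```

    A 17% clock speed bump thus doubles the license count, and the bill, with no change in chips or cores.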

In the past there has been no real alternative to proprietary databases. That has changed, at least for new applications that have no legacy database dependencies. The relative proportion of hardware and software purchase prices has been changing, too, and for some years software has gradually been consuming ever-larger slices of the pie. In recent times the pace has accelerated, though, as commodity-priced hardware has become much more powerful and database prices have increased.

For the most part, customers still have not entirely woken up to these trends. But if they ever do, we may see a significant shift in the database market. Is it possible that the perfect storm for proprietary databases is brewing?

5. Proprietary Counter Strategies

Proprietary Database companies are not without ways of responding to the challenge from OSDBs. Here are a few possibilities.
  • Strategy 1: Resistance is Futile - You Will Be Assimilated. Picking off your competitors can get a lot easier when they are open source companies, because most of them struggle to address a major discrepancy between their penetration and their annual revenue. Putting it another way, they have plenty of users but very little revenue to show for it. Hey, their product is free, after all! So far, not many people have figured out how to become absurdly rich by giving away software.

    Buying a competitor gives you access to their Intellectual Property (IP) and their customer base. Sometimes it simply eliminates a competitor from the marketplace. Either way, playing the Borg can be an effective way of reshaping the market in your favor.

    Note that Oracle has already made some raids across the border, having acquired InnoBase, maker of InnoDB, MySQL's most popular transactional engine, and Sleepycat Software, maker of Berkeley DB, another transactional engine used with MySQL. In response, MySQL has scrambled to introduce Falcon, a transactional database engine of its own.

    Any of the major proprietary database companies could reasonably play the role of the Borg in this scenario, though, since all of them have very deep pockets. MySQL is probably the most vulnerable to takeover, since it's privately held. PostgreSQL may be more difficult to silence, since it is developed by an active community rather than a single company. But in either case, even if you pick off the company or the key community contributors, you haven't removed the IP from the market because the database is open source.

  • Strategy 2: Bait And Switch. Offer a cut-down version of your own proprietary database for free, primarily targeting developers and companies doing pilot implementations. The idea is to make it easy for people to develop on your platform, then charge like wounded bulls when they deploy in earnest.

    All of the major proprietary databases have free cut-down versions. Oracle Lite supports databases limited to 4 Gbytes of disk and 64 concurrent connections. Microsoft's SQL Server Express Edition supports one CPU only, 1 Gbyte of memory, and 4 Gbytes of disk. IBM's DB2 Express-C supports 2 processor cores, 2 Gbytes of memory, and unlimited disk.

    These database editions are free, but they are not open source. The pricing policy could change overnight. And, as outlined above, each has restrictions that limit their usefulness for deployment.

  • Strategy 3: Revenue Pull-Through. Include the database as a bundle with other pieces of your software stack. Focus the customer's attention on buying something else, and chances are they won't notice or won't care that they've bought your database as well.

  • Strategy 4: Business As Usual. If you wait long enough, perhaps open source databases will stumble or be acquired by someone. Maybe their fall will be as meteoric as their rise. Or maybe the Borg will show up and assimilate them before they build too much more momentum. Either way, it will be one less competitor to worry about.

    If you think a wait-and-see strategy sounds implausible, history shows that when they can't make up their minds how to respond, a lot of companies (and countries for that matter) do little more than sit on their hands. Australia's first ice skating gold medalist, Steven Bradbury, demonstrated how to win this way in spectacular fashion at the 2002 Winter Olympics. (Actually the last comparison is not entirely fair to Steven Bradbury - although he did win because the other contestants all stumbled, Bradbury's presence in the final was an achievement in itself that clearly demonstrated his ability and commitment.)

  • Strategy 5: Reduce Prices. Much of the imperative for a migration to OSDBs will be removed if proprietary database companies drop their prices significantly. The excellence of proprietary databases is certainly not under question - I can personally attest to the performance, scalability, rich feature set, and robustness of both Oracle and DB2, for example. The search for alternative databases is largely driven by the need for pricing relief. OPEC discovered in the 1970s that inflated oil prices led to both energy conservation and a search for alternative energy sources. When OPEC soon reduced prices again, much of the impetus behind alternative energy disappeared in the West (sadly).

    This strategy is only feasible if proprietary database companies derive most of their revenue from other sources. Oracle is probably the most vulnerable here.

My vote for the Strategy Most Likely To Succeed is a tie between Revenue Pull-Through and Reduce Prices. Oracle is arguably becoming the most successful proponent of the pull-through strategy. Oracle wants to supply you with a full software stack, including an OS, virtualization software, a broad range of middleware, a database, and end user applications. The largest component of Oracle's revenue currently still comes from database licenses, but the company is working hard to reduce that dependency. Until that happens, reducing prices across the board will be challenging for Oracle. If Oracle succeeds with a pull-through strategy, it doesn't mean that OSDBs will fail, of course. It simply means that Oracle is less likely to sustain major damage from their success.

Price reductions, if they are large enough and sustained enough, are likely to do more to slow down OSDB penetration. But I suspect that proprietary companies, if they are to do it at all, will need to reduce prices soon; if enough momentum builds around OSDBs we will reach a tipping point where it won't matter any more (witness the rise of Linux).

6. Conclusion

Are proprietary databases doomed, then? Not at all. Even if proprietary database companies pull no surprises, they won't fade away anytime soon. Too much legacy application software currently depends on them. Until ISV applications - like SAP's R/3, for example - support MySQL and PostgreSQL, end users will be wedded to proprietary databases. (Note, though, that SAP does support its own free and open source MaxDB database with R/3). As Oracle builds its software portfolio, too, more applications will ship with the Oracle database bundled. And for the foreseeable future, proprietary databases will be the platform of choice for the largest mission-critical database deployments.

Make no mistake, though, open source databases are coming. For established companies it's more likely to be an evolution than a revolution. We will probably see a gradual OSDB surround, where new applications and deployments are increasingly based on OSDBs, driven by the cost savings. In emerging markets, though, it's looking more like a revolution. Last year I met with a small number of high-adrenaline companies in India, a market undergoing very rapid growth. They were openly dismissive of proprietary databases. One company had a small installation that was described as a "legacy" application due for replacement by an open source database. This is the scenario playing out today at high fliers like Google, Facebook, YouTube, and Flickr.

How can you take advantage of the rise of OSDBs? Here are some suggestions:

  • If you're considering a new database deployment, examine the possible cost savings of an OSDB.
  • If you're an established proprietary database user, don't simply throw out your database. Take the time to establish the feasibility and quantify the benefits of an OSDB solution before making a change.
  • If you're unhappy about the prices you're paying for database software, let your supplier know - the more senior the contact, the better. Suppliers do listen to their customers! As a side note, if you're a proprietary database customer looking at OSDBs, you don't need to make a big secret of it. I know of situations where proprietary database suppliers offered deep discounts to keep a customer away from OSDBs!

Perhaps the last word should go to The Economist. The following observations, reported in January 2002, may well prove prescient: "if software firms continue to think they can cash in on every new increase in computer performance, they will only encourage more and more customers to defect. And today, unlike a decade ago, open-source software has become just too good to be ignored."


What do you think? Feel free to let me know at Allan.Packer@Sun.COM.


I'm a Principal Engineer in the Performance Technologies group at Sun. My current role is team lead for the MySQL Performance & Scalability Project.
