Monday Dec 10, 2007

Are Proprietary Databases Doomed?

Times of change are upon the database market. The major established database companies are being challenged by open source upstarts like MySQL and PostgreSQL. For years, Open Source Databases (OSDBs) have been quietly increasing their penetration, but until recently they have lacked the capabilities to seriously threaten proprietary databases like Oracle, IBM's DB2, and Microsoft's SQL Server.

All that has changed. OSDBs now boast the necessary features and robustness to support commercial databases hundreds of gigabytes in size. And a growing trickle of competitive benchmark results shows them performing more than acceptably well against their better-established cousins, while offering significant benefits in Total Cost of Ownership (TCO).

What does this mean for proprietary databases? Are they doomed? And more importantly, are there opportunities for end users to benefit from the rise of OSDBs? I will explore these topics in a multi-part blog:

  1. Feature Stagnation In The Traditional Database Market
  2. License Costs: the Soft Underbelly of Proprietary Databases
  3. The Looming Open Source Database Tsunami
  4. The Perfect Storm for Proprietary Databases
  5. Proprietary Counter Strategies
  6. Conclusion
The standard disclaimer applies as always: these are my opinions and not necessarily those of Sun or anyone else.

1. Feature Stagnation In The Traditional Database Market

When I joined Sun in the late 80s, choosing a database was still an important issue for end users. Large customers routinely issued tenders for databases as well as for computer systems, and, to help in the selection process, customers often staged performance bakeoffs between competing database vendors using home-grown benchmarks.

In the 90s, fierce competition led to a rapid explosion in features as well as dramatic improvements in performance. Sun in particular invested a lot of engineering effort in working with the major database companies to improve performance and scalability. At the same time, a variety of new technologies appeared, many claiming they would knock the relational database from its throne. Distributed databases, object relational, shared nothing, and in-memory database implementations all made cameo appearances. Relational databases simply absorbed their best features and continued to rule. Simple database features like triggers and stored procedures gave way to more sophisticated technologies like replication, online backup, and cluster support.

By the turn of the millennium, relational databases had already pretty much met the essential requirements of end users, and proprietary database companies were either pointing their vacuum cleaners toward other interesting money piles, or losing the plot entirely and sailing off the edge of the world. Today, database releases continue to tout new features, but they're frosting on the cake rather than essentials. No-one issues a tender for a database unless they have unusual requirements. No-one loses their job because they chose the wrong database. And it's been that way for years.

Put very simply, the database has arguably become a commodity.

2. License Costs: the Soft Underbelly of Proprietary Databases

Databases may have become commodities, but selling them is still very profitable. Historically, as CPU performance increased with faster clock speeds, users continued to pay the same price for database licenses on the newer, more powerful systems. But that all changed as the industry moved to multi-core CPUs. As we will see, the licensing policies adopted by proprietary database companies have ensured that license charges have increased steeply as a result of this revolution in processor chip technology.

Some years ago, chip manufacturers began turning to multi-core CPU designs as a way of continuing to drive improvements in CPU performance. As further increases in clock speed ran into power and heat limits, multi-core chips offered a simple alternative: pack more than one core onto a single chip running at a lower clock speed. At the same time, proprietary database vendors began basing license charges on the number of cores in a system.

A practical example is Sun's dual-core UltraSPARC-IV chip. It replaced the single-core UltraSPARC-III chip at the same clock speed. By delivering two cores instead of one, the UltraSPARC-IV offered twice the performance of its predecessor. A typical system was the popular UltraSPARC-IV-based Sun Fire V490 which included four dual-core chips (eight cores). This system replaced the Sun Fire V480 with four single-core UltraSPARC-III chips. The customer received twice the CPU performance for the same hardware price.

Not so for the Oracle database price, though. Based on per-core licensing, the new system was now treated as an 8-core system instead of a 4-core system as previously. And worse, users were forced to a significantly more expensive database edition if they deployed systems with more than four cores. So, compared to the V480, the V490 now attracted a much higher per-core charge, on top of the requirement to pay for twice as many core licenses.

The following table illustrates the extraordinary windfall received by Oracle:

System                             V480   V490
Chips                                 4      4
Cores                                 4      8
Relative Performance                1.0    2.0
Relative Hardware Price             1.0    1.0
Database Core Licenses Required       4      8
Relative Core License Price         1.0    2.7
Relative Database License Price     1.0    5.3

In the face of considerable pushback from the industry, Oracle responded with "discounts" for the second core in a chip. In the case of the V490, that meant the discounted database price was still four times the license charge for the V480!
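
To make the arithmetic concrete, here is a minimal sketch of the per-core pricing model described above. The 2.7x per-core price comes from the table; the 0.75 multi-core "discount" factor is my reading of Oracle's eventual concession, so treat the numbers as illustrative rather than official.

    from math import ceil

    def license_price(cores, per_core_price, core_factor=1.0):
        # Chargeable licenses = cores x core factor (rounded up),
        # each at the per-core price of the required edition
        return ceil(cores * core_factor) * per_core_price

    v480 = license_price(cores=4, per_core_price=1.0)   # baseline edition
    v490 = license_price(cores=8, per_core_price=2.7)   # more expensive edition
    print(v490 / v480)   # ~5.4x (the table's 5.3, rounding aside)

    # With the 0.75 multi-core "discount" factor applied:
    v490_disc = license_price(cores=8, per_core_price=2.7, core_factor=0.75)
    print(v490_disc / v480)   # ~4.1x - still roughly four times the V480 charge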

So while users continued to enjoy more powerful and more feature-rich hardware at the same or lower price, they were paying a lot more to use the same database software on the new hardware.

The situation prompted comments like those from Stephen Elliot, an enterprise systems analyst at IDC, who anticipated increased pressure on Oracle to be more flexible with pricing, and reported that "Oracle is becoming the lone force on the processor issue". Since that time, under pressure from Microsoft, which introduced per-chip database licensing back in Feb 2005, Oracle has continued to tweak its licensing model, but so far without making wholesale changes.

It should be noted that Oracle is not alone in following this path - similar anomalies apply to the pricing of IBM's DB2 database. Nor is Microsoft the only vendor to embrace a per-chip pricing strategy - as long ago as 2005, The Register reported that BEA was adopting the same per-socket licensing model as Microsoft and VMware, and noted that "the software maker's move puts it in prime fighting position against Oracle and IBM, which have been slow to adjust their pricing models for new chips from AMD, Intel and others." According to Ashlee Vance in July 2007, "Most software vendors have had the decency to settle on a per-socket basis for their pricing schemes, ignoring the number of cores per chip. Meanwhile, IBM and Oracle, the vendors with the most to lose, prefer to keep you in a state of pricing confusion."

Anecdotally, some companies are finding that database licenses have become their single biggest IT cost. The impact is probably greater on small and medium-sized companies that don't have the same ability to command the hefty discounts that larger companies typically enjoy from database vendors. A colleague related a story that illustrates the issue. His brother worked for a 200-person company that decided it was time to upgrade their database applications. They set out to deploy a well-known proprietary database until they discovered that the database license fee was going to exceed their entire current annual IT budget! They ended up deploying an open source database instead.

3. The Looming Open Source Database Tsunami

In August 2007, Tom Daly revealed the results of a SPECjAppServer2004 benchmark based on an entirely open source software stack, with PostgreSQL running on a Sun Fire T2000 server and the GlassFish Application Server on two Sun Fire X4200 servers. The announcement was revealing for two reasons:
  • It showed PostgreSQL capable of supporting more than 6,000 concurrent users on a commodity hardware platform
  • The benchmark result was within 10% of a published result from an HP/Oracle configuration costing more than three times as much. The major reason for the huge price difference was the cost of Oracle (at $110,000 compared to PostgreSQL $0). A rough price/performance calculation follows below.

    For the record, it should be noted that the SPECjAppServer2004 benchmark does not include a pricing metric, so these are not official prices. Nonetheless, since benchmark configurations clearly cost actual money, it seems reasonable to assess the prices involved. In this case, all hardware and software prices were drawn from publicly-available sources.
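
Using the rough figures above - within 10% on throughput, at less than a third of the price - the price/performance gap is easy to quantify. A minimal sketch, with multipliers taken from the text rather than from any official SPEC metric:

    # Relative throughput and price as described above (illustrative only)
    pg_perf, pg_price = 0.9, 1.0     # PostgreSQL stack: ~10% behind, one third the price
    ora_perf, ora_price = 1.0, 3.0   # HP/Oracle stack: baseline throughput, ~3x the price

    advantage = (pg_perf / pg_price) / (ora_perf / ora_price)
    print(f"PostgreSQL stack: about {advantage:.1f}x the throughput per dollar")  # ~2.7x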

Open source databases still do not scale as well as proprietary databases, but they now perform well enough to manage a broad range of challenging applications. End users who previously saw OSDBs as primarily suitable for simple low-volume applications are now able to reasonably consider them for departmental database server deployments.

Do OSDBs have the features needed for serious deployment, though? Twelve months ago, Forrester Research released a report suggesting that 80 percent of applications typically use only 30 percent of the features found in commercial databases, and that open source databases deliver those features today. While Forrester noted that OSDBs still lag for mission-critical applications, those holes are likely to be plugged as bigger players announce 24x7 technical support and service (as has already happened, for example, with PostgreSQL).

A recent survey of Oracle users showed 20% having open source databases larger than 50 Gbytes and two thirds citing cost as the driver to adoption of open source. Open source database adoption is still relatively small. Does that mean OSDBs should be dismissed? Not according to a Gartner analyst, quoted last year as saying "We think it is a big deal. Granted, in the DBS market right now, they are very small players. Remember about 10 years ago, Linux in the market was a very small player? Not so much, anymore."

The comparison may be apt. With the combination of essential features, improved performance, robust support, and compelling price, OSDBs today bear a striking resemblance to Linux a few years ago. Many believe that the wave looming on the horizon is a tsunami. Time alone will tell.

4. The Perfect Storm for Proprietary Databases

Underneath major end-user applications like ERP and data warehouse software, every major hardware and software component is now subject to commodity pricing. As we have seen, each year hardware prices continue to decline while processing power increases. At the same time, leading operating systems like Sun's Solaris and Linux are now open source and can be deployed for free. The same is true of most other components of the software stack, including virtualization software, databases, application servers, web servers, and other middleware. Even the RTL for Sun's UltraSPARC T1 and T2 chips has been open sourced, allowing community members to build on proven hardware designs at much lower cost.

Why, then, is proprietary database software becoming more expensive while everything else reduces in price? End users normally expect to benefit from the cost savings resulting from improvements in technology. I am writing this blog, for example, on an affordable computer that would easily outperform expensive commercial systems from just 10 years ago.

It seems difficult to resist the conclusion that proprietary database companies have managed to redirect a good chunk of these savings away from end users and into their own coffers. Successful as this strategy has been, though, it could ultimately backfire. The more expensive proprietary databases become, the more attractive lower cost alternatives appear.

A number of forces are currently at work in the market:

  • The momentum around open source software has continued to build, and many open source products, while not as capable as proprietary alternatives, have become "good enough" to replace them. This is also true of open source databases.
  • Large technology suppliers are beginning to bundle OSDBs, with the result that customers are able to take out support contracts with established companies as well as startups. Sun, for example, ships and supports PostgreSQL.
  • Benchmarks are beginning to feature OSDBs. Thus OSDBs are starting down a path that proprietary databases have trodden over many years. Benchmarks can be expected to highlight the capabilities of OSDBs, accelerate the process of OSDB performance improvement, and, increasingly, expose the price difference between OSDBs and proprietary databases. And as the scalability of OSDBs increases, benchmarks will be published on larger systems, opening up an even wider gap in database pricing.
  • The sweet spot in the hardware market is a two- to four-chip server. With the advent of quad-core chips from Intel and AMD, and the 8-core UltraSPARC T1 and T2 chips released by Sun, such systems have become powerful enough to carry out processing that required much larger and more expensive systems in the past. The pricing chasm between low cost hardware and high cost proprietary databases continues to widen.

    For example, the UltraSPARC T1-based servers, the Sun Fire T1000 and T2000, shipped with 8 cores on a single chip. The recently-released single-chip, 8-core UltraSPARC T2 servers, which deliver twice the performance at roughly the same cost, now attract a DB2 license charge that has increased 66% compared to the T1 platforms. So IBM has taken the opportunity to significantly hike the price of the database software on that platform, even though the number of chips and number of cores has not changed.

    At the same time, 8-core UltraSPARC T1 servers attracted a 2-core license charge from Oracle (a very reasonable pricing decision on Oracle's part - see this table which is referenced from here). At the time of writing, Oracle has not announced a final decision on pricing for the T2 platform, but it is unlikely that Oracle will be able to resist the temptation to emulate or even outdo the cynicism of IBM. Why such a pessimistic expectation? Because the same table referred to above announces that the 1.4GHz UltraSPARC version of the older T1 servers will be subject to a 0.5 multiplier instead of the 0.25 multiplier that applies to the 1.0 GHz and 1.2 GHz versions of the same platform. So a simple 17% clock speed increase in the hardware, with no other changes at all, prompted Oracle to double the license charge! (The core-factor arithmetic is sketched in the example after this list.)

    To be fair to Oracle, since the UltraSPARC T1 and T2 platforms are single chip systems, customers can purchase Standard Edition and Standard Edition One licenses for them at greatly reduced prices. But these editions do not offer the full Oracle feature set, and in particular they do not offer the parallel processing capabilities essential for efficient processing on the 32-way T1 and 64-way T2 platforms; if customers want parallel capabilities they must purchase the vastly more expensive Enterprise Edition with its core-based licensing and inexplicable and inconsistent "discounts".

  • Web 2.0 is gathering momentum, and the new breed of companies leading the charge are largely ignoring proprietary databases for their deployments. Probably much of the scepticism relates to the cost implications.
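
To see how the core-factor arithmetic above plays out, here is a minimal sketch. The ceil(cores x factor) rule reflects my understanding of Oracle's Enterprise Edition per-core licensing, with the multipliers quoted earlier:

    from math import ceil

    def oracle_core_licenses(cores, core_factor):
        # Enterprise Edition: chargeable licenses = cores x core factor, rounded up
        return ceil(cores * core_factor)

    print(oracle_core_licenses(8, 0.25))   # 1.0/1.2 GHz UltraSPARC T1: 2 licenses
    print(oracle_core_licenses(8, 0.5))    # 1.4 GHz UltraSPARC T1: 4 licenses -
                                           # double the charge for a 17% clock bump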

In the past there has been no real alternative to proprietary databases. That has changed, at least for new applications that have no legacy database dependencies. The relative proportions of hardware and software purchase prices have been changing, too: for some years software has gradually been consuming ever-larger slices of the pie, and in recent times the pace has accelerated as commodity-priced hardware has become much more powerful while database prices have increased.

For the most part, customers have not yet woken up to these trends. But if they ever do, we may see a significant shift in the database market. Is it possible that the perfect storm for proprietary databases is brewing?

5. Proprietary Counter Strategies

Proprietary database companies are not without ways of responding to the challenge from OSDBs. Here are a few possibilities.
  • Strategy 1: Resistance is Futile - You Will Be Assimilated. Picking off your competitors can get a lot easier when they are open source companies, because most of them struggle to address a major discrepancy between their penetration and their annual revenue. Putting it another way, they have plenty of users but very little revenue to show for it. Hey, their product is free, after all! So far, not many people have figured out how to become absurdly rich by giving away software.

    Buying a competitor gives you access to their Intellectual Property (IP) and their customer base. Sometimes it simply eliminates a competitor from the marketplace. Either way, playing the Borg can be an effective way of reshaping the market in your favor.

    Note that Oracle has already made some raids across the border, having acquired Innobase, maker of InnoDB, MySQL's most popular transactional engine, and Sleepycat Software, maker of Berkeley DB, another transactional engine used with MySQL. In response, MySQL has scrambled to introduce Falcon, a transactional database engine of its own.

    Any of the major proprietary database companies could reasonably play the role of the Borg in this scenario, though, since all of them have very deep pockets. MySQL is probably the most vulnerable to takeover, since it's privately held. PostgreSQL may be more difficult to silence, since it is developed by an active community rather than a single company. But in either case, even if you pick off the company or the key community contributors, you haven't removed the IP from the market because the database is open source.

  • Strategy 2: Bait And Switch. Offer a cut-down version of your own proprietary database for free, primarily targeting developers and companies doing pilot implementations. The idea is to make it easy for people to develop on your platform, then charge like wounded bulls when they deploy in earnest.

    All of the major proprietary databases have free cut-down versions. Oracle's Database Express Edition supports databases limited to 4 Gbytes of disk and 64 concurrent connections. Microsoft's SQL Server Express Edition supports one CPU only, 1 Gbyte of memory, and 4 Gbytes of disk. IBM's DB2 Express-C supports 2 processor cores, 2 Gbytes of memory, and unlimited disk.

    These database editions are free, but they are not open source. The pricing policy could change overnight. And, as outlined above, each has restrictions that limit its usefulness for deployment.

  • Strategy 3: Revenue Pull-Through. Include the database as a bundle with other pieces of your software stack. Focus the customer's attention on buying something else, and chances are they won't notice or won't care that they've bought your database as well.

  • Strategy 4: Business As Usual. If you wait long enough, perhaps open source databases will stumble or be acquired by someone. Maybe their fall will be as meteoric as their rise. Or maybe the Borg will show up and assimilate them before they build too much more momentum. Either way, it will be one less competitor to worry about.

    If you think a wait-and-see strategy sounds implausible, history shows that when they can't make up their minds how to respond, a lot of companies (and countries, for that matter) do little more than sit on their hands. Australia's first Winter Olympics gold medalist, short-track speed skater Steven Bradbury, demonstrated how to win this way in spectacular fashion at the 2002 Winter Olympics. (Actually, the comparison is not entirely fair to Bradbury - although he did win because the other contestants all stumbled, his presence in the final was an achievement in itself that clearly demonstrated his ability and commitment.)

  • Strategy 5: Reduce Prices. Much of the imperative for a migration to OSDBs would be removed if proprietary database companies dropped their prices significantly. The excellence of proprietary databases is certainly not in question - I can personally attest to the performance, scalability, rich feature set, and robustness of both Oracle and DB2, for example. The search for alternative databases is largely driven by the need for pricing relief. OPEC discovered in the 1970s that inflated oil prices led to both energy conservation and a search for alternative energy sources. When OPEC subsequently reduced prices, much of the impetus behind alternative energy in the West disappeared (sadly).

    This strategy is only feasible if proprietary database companies derive most of their revenue from other sources. Oracle is probably the most vulnerable here.

My vote for the Strategy Most Likely To Succeed is a tie between Revenue Pull-Through and Reduce Prices. Oracle is arguably becoming the most successful proponent of the pull-through strategy. Oracle wants to supply you with a full software stack, including an OS, virtualization software, a broad range of middleware, a database, and end user applications. The largest component of Oracle's revenue currently still comes from database licenses, but the company is working hard to reduce that dependency. Until that happens, reducing prices across the board will be challenging for Oracle. If Oracle succeeds with a pull-through strategy, it doesn't mean that OSDBs will fail, of course. It simply means that Oracle is less likely to sustain major damage from their success.

Price reductions, if they are large enough and sustained enough, are likely to do more to slow down OSDB penetration. But I suspect that proprietary companies, if they are to do it at all, will need to reduce prices soon; if enough momentum builds around OSDBs we will reach a tipping point where it won't matter any more (witness the rise of Linux).

6. Conclusion

Are proprietary databases doomed, then? Not at all. Even if proprietary database companies pull no surprises, they won't fade away anytime soon. Too much legacy application software currently depends on them. Until ISV applications - like SAP's R/3, for example - support MySQL and PostgreSQL, end users will be wedded to proprietary databases. (Note, though, that SAP does support its own free and open source MaxDB database with R/3). As Oracle builds its software portfolio, too, more applications will ship with the Oracle database bundled. And for the foreseeable future, proprietary databases will be the platform of choice for the largest mission-critical database deployments.

Make no mistake, though, open source databases are coming. For established companies it's more likely to be an evolution than a revolution. We will probably see a gradual OSDB surround, where new applications and deployments are increasingly based on OSDBs, driven by the cost savings. In emerging markets, though, it's looking more like a revolution. Last year I met with a small number of high-adrenaline companies in India, a market undergoing very rapid growth. They were openly dismissive of proprietary databases. One company had a small installation that was described as a "legacy" application due for replacement by an open source database. This is the scenario playing out today at high fliers like Google, Facebook, YouTube, and Flickr.

How can you take advantage of the rise of OSDBs? Here are some suggestions:

  • If you're considering a new database deployment, examine the possible cost savings of an OSDB.
  • If you're an established proprietary database user, don't simply throw out your database. Take the time to establish the feasibility and quantify the benefits of an OSDB solution before making a change.
  • If you're unhappy about the prices you're paying for database software, let your supplier know - the more senior the contact, the better. Suppliers do listen to their customers! As a side note, if you're a proprietary database customer looking at OSDBs, you don't need to make a big secret of it. I know of situations where proprietary database suppliers offered deep discounts to keep a customer away from OSDBs!

Perhaps the last word should go to The Economist. The following observations, reported in January 2002, may well prove prescient: "if software firms continue to think they can cash in on every new increase in computer performance, they will only encourage more and more customers to defect. And today, unlike a decade ago, open-source software has become just too good to be ignored."

Allan

What do you think? Feel free to let me know at Allan.Packer@Sun.COM.


Sunday May 07, 2006

Consolidation Tool Release Info

CoolThreads Consolidation: The Easy Way

Solaris 10 and a CoolThreads server make a potent combination. Along with the raw horsepower, the low wattage, and the miserly rack requirements of the CoolThreads server, you get the robust, feature-rich, open source Solaris 10 operating system.

So far so good, but if you're new to Solaris 10 there's a lot to learn. Solaris Containers and Resource Management make a big difference for consolidation, but they take some getting used to. And then you need to figure out the implications of having four threads per core, for example, when configuring the Sun Fire T1000 or T2000. Is there a way of easing the transition for sysadmins who already have too much to think about?

Enter the Consolidation Tool for Sun Fire Servers V1.0, Sun Fire T1000 and T2000 Edition! This GUI tool is designed to simplify the task of consolidating applications onto CoolThreads servers. It also provides a friendly introduction to Solaris Containers, including Zones, resource pools, psets, and the FSS (Fair Share Scheduler) scheduling class.

Consolidation Tool Introduction

The Consolidation Tool is free (open source under the GPL), unsupported, lightweight, and focused on Solaris 10 and the Sun Fire T1000/T2000. It's ideal for the systems administrator who is considering migrating applications from multiple Xeon boxes running Linux to a T2000 running Solaris 10. The tool can also offer the technically-minded sysadmin a simple introduction to the command line syntax needed to build zones, pools, and psets (via the commands script it creates). Note that the Consolidation Tool expects to work with a new Solaris installation - it does not attempt to manage systems that are already using containers.

Consolidation Tool Overview

The tool provides a simple, easy-to-use interface with context-sensitive help. The user can choose between a Basic and an Expert mode, with the latter providing more control over the final configuration at the cost of greater complexity. Intelligent defaults are provided in both modes. The tool eases the user into defining and creating Solaris Containers without assuming any previous knowledge of that technology. It will deploy applications into processor sets where appropriate, and allocate "CPUs" (i.e. hardware threads) in a way that ensures all of a core's threads end up in the same processor set (sketched below). The tool asks a series of user-friendly questions to determine whether to use full-root Zones, sparse Zones, or no Zones at all. The tool also optionally installs versions of key public domain software into the newly-established Zones on the target CoolThreads system.
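
To illustrate the whole-core allocation idea, here is a minimal sketch - not the tool's actual code - of how hardware threads can be grouped by core and turned into the kind of poolcfg commands that appear in the generated script. The helper names and the "db" pset are hypothetical, and the exact command syntax should be checked against the poolcfg(1M) man page:

    THREADS_PER_CORE = 4   # UltraSPARC T1; the T2 has 8 threads per core

    def core_threads(core_ids, tpc=THREADS_PER_CORE):
        # Hardware thread ("CPU") IDs for whole cores, so that a pset
        # never splits one core's threads across processor sets
        return [c * tpc + t for c in core_ids for t in range(tpc)]

    def pset_commands(name, core_ids):
        cpus = core_threads(core_ids)
        cpu_list = "; ".join(f"cpu {i}" for i in cpus)
        n = len(cpus)
        return [
            f"poolcfg -c 'create pset {name}-pset (uint pset.min = {n}; uint pset.max = {n})'",
            f"poolcfg -dc 'transfer to pset {name}-pset ({cpu_list})'",
            f"poolcfg -c 'create pool {name}-pool'",
            f"poolcfg -c 'associate pool {name}-pool (pset {name}-pset)'",
        ]

    # Give a database zone two whole cores (8 hardware threads on a T1):
    for line in pset_commands("db", core_ids=[0, 1]):
        print(line)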

The tool prepares a report summarizing the planned deployment, a commands script that is used to create the Zones, pools, and psets and install any specified public domain applications, and a file that stores the configuration data. This approach means that it isn't necessary to run the tool on the target CoolThreads system. Instead you can configure the consolidation environment in advance on a client of your choice. The final step is to run the commands script on the target Sun Fire T1000/T2000 system. Note that if you have elected within the tool to install public domain applications, you will need to put the full distribution onto the target system so that the script can find the public domain applications when you run it.

The tool can be run on any of the following client operating systems:

  • Solaris on SPARC
  • Solaris on x64/x86
  • Linux on Intel/AMD
  • Mac OS X on PowerPC

Where Can I Get It?

You can find the tool on BigAdmin and also under Cool Tools at OpenSPARC.net. Both locations will let you download a presentation introducing the tool, and point you to the tool download location at the Sun Download Center. Download options include:
  • A 20MB tar.gz file, which provides the tool plus the necessary libraries for all clients, but none of the public domain packages
  • A 130MB tar.gz file with the full distribution, which includes the tool, its dependent libraries, and several public domain applications.
While you're at it, take the time to check out the OpenSPARC.net Cool Tools page, which also features other useful tools.

Feedback and Discussion

If you'd like to offer feedback on the Consolidation Tool, you can do so at consol-tool-feedback@sun.com. This is an auto-responder alias, so don't expect a reply (other than confirmation that your email has been received). If you would like to discuss the tool with other users, check out the Cool Tools Forum.

Thanks, Allan

About

I'm a Principal Engineer in the Performance Technologies group at Sun. My current role is team lead for the MySQL Performance & Scalability Project.
