Monday Oct 13, 2008

Sun's 4-chip CMT system raises the bar

Find out about Sun's new 4-chip UltraSPARC T2 Plus system direct from the source: Sun's engineers.

Sun today announced the 4-chip variant of its UltraSPARC T2 Plus system, the Sun SPARC Enterprise T5440. This new system is the big brother of the 2-chip Sun SPARC Enterprise T5140 and T5240 systems released in April 2008. Each UltraSPARC T2 Plus chip offers 8 hardware strands in each of 8 cores. With up to four UltraSPARC T2 Plus chips delivering a total of 32 cores and 256 hardware threads and up to 512Gbytes of memory in a compact 4U package, the T5440 raises the bar for server performance, price-performance, energy efficiency, and compactness. And with Logical Domains (LDoms) and Solaris Containers, the potential for server consolidation is compelling.

Standard configurations of the Sun SPARC Enterprise T5440 include 2- and 4-chip systems at 1.2 GHz, and a 4-chip system at 1.4 GHz. All of these configurations come with 8 cores per chip.

The blogs posted today by various Sun engineers offer a broad perspective on the new system. The system design, the various hardware subsystems, the performance characteristics, the application experiences - it's all here! And if you'd like some background on how we arrived at this point, check out the earlier UltraSPARC T2 blogs (CMT Comes of Age) and the first release of the UltraSPARC T2 Plus (Sun's CMT goes multi-chip).

Let's see what the engineers have to say (and more will be posted throughout the day):

For more information on the new Sun SPARC Enterprise T5440 server, check out this web page.

Sunday Oct 12, 2008

Sizing a Sun SPARC Enterprise T5440 Server

Today Sun released the Sun SPARC Enterprise T5440 server, a wild beast caged in a tiny 4U package. To put it into perspective, this system has roughly the same performance potential as four Enterprise 10000 (Starfire) systems. The floor space, energy consumption, and cooling required by those four older Starfire systems don't bear thinking about, either.

In more modern terms, the T5440 will handily outperform two Sun Fire E2900 systems with 12 dual-core UltraSPARC IV+ chips. Not bad for just four UltraSPARC T2 Plus chips. And when you add in up to 512Gbytes of memory and plenty of I/O connectivity, that's a lot of system.

Using up all that horsepower gets to be an interesting challenge. Some applications can consume the entire system, as demonstrated by some of the benchmarks published today. But for the most part, end users will need to find other ways of deploying the considerable resources the system delivers. Let's take a brief look at some of the issues to consider.

The first important factor is that the available resource is delivered in the form of many hardware threads. A four-chip T5440 delivers 32 cores with a whopping 256 hardware threads, and the Operating System in turn sees 256 CPUs. Each "CPU" has lower single-thread performance than many other current CPU chip implementations. But the capacity to get work done is enormous. For a simple analogy, consider water. One drop of water can't do a lot of damage. But that same drop together with a bunch of its friends carved out the Grand Canyon. (The UltraSPARC T1 chip was not codenamed "Niagara" for nothing!)

For applications that are multi-threaded, multi-process, or multi-user, you can spread the work across the available threads/CPUs. But don't expect an application to show off the strengths of this platform if it has heavy single-threaded dependencies. This is true for all systems based on the UltraSPARC T1, T2, and T2 Plus chips.
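
To make that concrete, here's a minimal Java sketch (the class name and the work items are invented for illustration) of the pattern that suits this class of machine: queue many independent work items onto a thread pool sized to the number of virtual CPUs the operating system reports, rather than processing them serially on a single thread.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class SpreadWork {
        public static void main(String[] args) {
            // On a 4-chip T5440 running Solaris, availableProcessors()
            // reports 256 virtual CPUs - one per hardware strand.
            int cpus = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cpus);

            // Submit many independent work items; they run in parallel
            // across the hardware strands instead of serially on one CPU.
            for (int i = 0; i < 10000; i++) {
                final int item = i;
                pool.execute(new Runnable() {
                    public void run() {
                        process(item);
                    }
                });
            }
            pool.shutdown();
        }

        private static void process(int item) {
            // Placeholder for the real per-item work: a request, a record,
            // a batch step, and so on.
        }
    }

A single-threaded loop over the same 10,000 items would leave 255 of the 256 hardware threads idle, which is exactly the scenario described above.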

The good news is that people are starting to understand this message. When Sun first released the UltraSPARC T1 chip back in December 2005, it was a bit of a shock to many people. The Sun Fire T1000 and T2000 systems were the first wave of a new trend in CPU design, and some took a while to get their heads around the implications. Now, with Intel, AMD, and others jumping on the bandwagon, the stakes have become higher. And the rewards will flow quickest to those application developers who had the foresight to get their act together earliest.

The second factor is virtualization. Once again, people today understand the benefits of consolidating a larger number of smaller systems onto a smaller number of larger systems using virtualization technologies. The T5440 is an ideal platform for such consolidations. With its Logical Domain (LDom) capabilities and with Solaris Containers, there are many effective ways to carve the system up into smaller logical pieces.

And then there's the even simpler strategy of just throwing a bunch of different applications onto the system and letting Solaris handle the resource management. Solaris actually does an excellent job of managing such complex environments.

Summing it up, the T5440 is made to handle large workloads. As long as you don't expect great throughput from a single thread, you'll find it has a lot to offer in a small package.

Tuesday Apr 08, 2008

Sun's CMT goes multi-chip

Sun engineers blog on the new multi-chip UltraSPARC T2 Plus systems

Today Sun is announcing new CMT-based systems, hard on the heels of the UltraSPARC T2 systems launched in October 2007 (the Sun SPARC Enterprise T5120 and T5220 systems). Whereas previous Sun CMT systems were based around a single-socket UltraSPARC T1 or T2 processor, the new systems incorporate two processors, doubling the number of cores and the number of hardware threads compared to UltraSPARC T2-based systems. Each UltraSPARC T2 Plus chip includes 8 hardware strands in each of 8 cores, so the Operating System sees a total of 128 CPUs. The new systems deliver an unprecedented amount of CPU capacity in a package this size, as evidenced by the very impressive benchmark results published today.

Systems come in both 1U and 2U packaging: the 1U Sun SPARC Enterprise T5140 ships with two UltraSPARC T2 Plus chips, each with 4, 6, or 8 cores at 1.2 GHz, and the 2U Sun SPARC Enterprise T5240 ships with two UltraSPARC T2 Plus chips, each with 6 or 8 cores at 1.2 GHz, or 8 cores at 1.4 GHz. For more information, a whitepaper is available that provides details on the processors and systems.

Once again, some of the engineers who have worked on these new systems have shared their experiences and insights in a series of wide-ranging blogs (for engineers' perspectives on the UltraSPARC T2 systems, check out the CMT Comes of Age blog). These blogs will be cross referenced here as they are posted. You should expect to see more appear in the next day or two, so plan on visiting again later to see what's new.

Here's what the engineers have to say:

  • UltraSPARC T2 Plus Server Technology. Tim Cook offers insights into what drove processor design toward CMT. Marc Hamilton serves up a brief overview of CMT for those less familiar with the technology. Dwayne Lee touches on the UltraSPARC T2 Plus chip. Josh Simons offers us a look under the hood of the new servers. Denis Sheahan provides an overview of the hardware components of the UltraSPARC T2 Plus systems, then follows it up with details of the memory and coherency of the UltraSPARC T2 Plus processor. Lawrence Spracklen introduces the crypto acceleration on the chip. Richard Elling talks about RAS (Reliability, Availability, and Serviceability) in the systems, and Scott Davenport describes their predictive self-healing features.
  • Virtualization. Honglin Su announces the availability of the Logical Domains 1.0.2 release, which supports the UltraSPARC T2 Plus platforms. Eric Sharakan offers further observations on LDoms on T5140 and T5240. Ning Sun discusses a study designed to show how LDoms with CMT can improve scalability and system utilization, and points to a Blueprint on the issue.
  • Solaris Features. Steve Sistare outlines some of the changes made to Solaris to support scaling on large CMT systems.
  • System Performance. Peter Yakutis offers insights into PCI-Express performance. Brian Whitney shares some Stream benchmark results. Alan Chiu explains 10 Gbit Ethernet performance on the new systems. Charles Suresh gives some fascinating background on how line speed was achieved on the 10 Gbit Ethernet NICs.
  • Application Performance. What happens when you run Batch Workloads on a Sun CMT server? Giri Mandalika's blog shares experiences running the Oracle E-Business Suite Payroll 11i workload. You might also find Satish Vanga's blog interesting - it focuses on SugarCRM running on MySQL (on a single-socket T5220). Josh Simons reveals the credentials of the new systems for HPC applications and backs it up by pointing to a new SPEComp2001 world record. Joerg Schwarz considers the applicability of the UltraSPARC T2 Plus servers for Health Care applications.
  • Web Tier. CVR explores the new World Record SPECweb2005 result on the T5220 system, and Walter Bays teases out the subtleties of the SPEC reporting process.
  • Java Performance. Dave Dagastine announces a World Record SPECjbb2005 result.
  • Benchmark Performance. The irrepressible bmseer details a number of world record results, including SPECjAppServer2004, SAP SD 2-tier, and SPECjbb2005.
  • Open Source Community. Josh Berkus explores the implications for PostgreSQL using virtualization on the platform. Jignesh Shah discusses the possibilities with Glassfish V2 and PostgreSQL 8.3.1 on the T5140 and T5240 systems.
  • Sizing. Walter Bays introduces the CoolThreads Selection Tool (cooltst) v3.0 which is designed to gauge how well workloads will run on UltraSPARC T2 Plus systems.

Check out also the Sun CMT Wiki.

Monday Dec 10, 2007

Are Proprietary Databases Doomed?

Times of change are upon the database market. The major established database companies are being challenged by open source upstarts like MySQL and PostgreSQL. For years, Open Source Databases (OSDBs) have been quietly increasing their penetration, but until recently they have lacked the capabilities to seriously threaten proprietary databases like Oracle, IBM's DB2, and Microsoft's SQL Server.

All that has changed. OSDBs now boast the necessary features and robustness to support commercial databases hundreds of Gigabytes in size. And a growing trickle of competitive benchmark results shows them performing more than acceptably well against their better-established cousins, while offering significant benefits in Total Cost of Ownership (TCO).

What does this mean for proprietary databases? Are they doomed? And more importantly, are there opportunities for end users to benefit from the rise of OSDBs? I will explore these topics in a multi-part blog:

  1. Feature Stagnation In The Traditional Database Market
  2. License Costs: the Soft Underbelly of Proprietary Databases
  3. The Looming Open Source Database Tsunami
  4. The Perfect Storm for Proprietary Databases
  5. Proprietary Counter Strategies
  6. Conclusion
The standard disclaimer applies as always: these are my opinions and not necessarily those of Sun or anyone else.

1. Feature Stagnation In The Traditional Database Market

When I joined Sun in the late 80s, choosing a database was still an important issue for end users. Large customers routinely issued tenders for databases as well as for computer systems, and, to help in the selection process, customers often staged performance bakeoffs between competing database vendors using home-grown benchmarks.

In the 90s, fierce competition led to a rapid explosion in features as well as dramatic improvements in performance. Sun in particular invested a lot of engineering effort in working with the major database companies to improve performance and scalability. At the same time, a variety of new technologies appeared, many claiming they would knock the relational database from its throne. Distributed databases, object relational, shared nothing, and in-memory database implementations all made cameo appearances. Relational databases simply absorbed their best features and continued to rule. Simple database features like triggers and stored procedures gave way to more sophisticated technologies like replication, online backup, and cluster support.

By the turn of the millennium, relational databases had already pretty much met the essential requirements of end users, and proprietary database companies were either pointing their vacuum cleaners toward other interesting money piles, or losing the plot entirely and sailing off the edge of the world. Today, database releases continue to tout new features, but they're frosting on the cake rather than essentials. No one issues a tender for a database unless they have unusual requirements. No one loses their job because they chose the wrong database. And it's been that way for years.

Put very simply, the database has arguably become a commodity.

2. License Costs: the Soft Underbelly of Proprietary Databases

Databases may have become commodities, but selling them is still very profitable. Historically, as CPU performance increased with faster clock speeds, users continued to pay the same price for database licenses on the newer, more powerful systems. But that all changed as the industry moved to multi-core CPUs. As we will see, the licensing policies adopted by proprietary database companies have ensured that license charges have increased steeply as a result of this revolution in processor chip technology.

Some years ago, chip manufacturers began turning to multi-core CPU designs as a way of continuing to drive improvements in CPU performance. As it becomes more difficult to increase the transistor density of CPU chips and increase clock speeds, multi-core chips offer a simple alternative by packing more than one core on a single chip running at a lower clock speed. At the same time, proprietary database vendors began basing license charges on the number of cores in a system.

A practical example is Sun's dual-core UltraSPARC-IV chip. It replaced the single-core UltraSPARC-III chip at the same clock speed. By delivering two cores instead of one, the UltraSPARC-IV offered twice the performance of its predecessor. A typical system was the popular UltraSPARC-IV-based Sun Fire V490 which included four dual-core chips (eight cores). This system replaced the Sun Fire V480 with four single-core UltraSPARC-III chips. The customer received twice the CPU performance for the same hardware price.

Not so for the Oracle database price, though. Based on per-core licensing, the new system was now treated as an 8-core system instead of a 4-core system as previously. And worse, users were forced to a significantly more expensive database edition if they deployed systems with more than four cores. So, compared to the V480, the V490 now attracted a much higher per-core charge, on top of the requirement to pay for twice as many core licenses.

The following table illustrates the extraordinary windfall received by Oracle:

System                              V480    V490
Chips                                  4       4
Cores                                  4       8
Relative Performance                 1.0     2.0
Relative Hardware Price              1.0     1.0
Database Core Licenses Required        4       8
Relative Core License Price          1.0     2.7
Relative Database License Price      1.0     5.3
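
To make the arithmetic behind the bottom row explicit (a simple derivation using only the relative figures in the table; the small discrepancy is rounding):

    Relative Database License Price (V490 vs V480)
        = (8 core licenses / 4 core licenses) x 2.7 (relative per-core price)
        = 2 x 2.7
        ≈ 5.3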

In the face of considerable pushback from the industry, Oracle responded with "discounts" for the second core in a chip. In the case of the V490, that meant the discounted database price was still four times the license charge for the V480!

So while users continued to enjoy more powerful and more feature-rich hardware at the same or lower price, they were paying a lot more to use the same database software on the new hardware.

The situation prompted comments like those from Stephen Elliot, an enterprise systems analyst at IDC, who anticipated increased pressure on Oracle to be more flexible with pricing, and reported that "Oracle is becoming the lone force on the processor issue". Since that time, under pressure from Microsoft, which introduced per-chip database licensing back in Feb 2005, Oracle has continued to tweak its licensing model, but so far without making wholesale changes.

It should be noted that Oracle is not alone in following this path - similar anomalies apply to the pricing of IBM's DB2 database. Nor is Microsoft the only vendor to embrace a per-chip pricing strategy - as long ago as 2005, the Register reported that BEA was adopting the same per-socket licensing model as Microsoft and VMware, and noted that "the software maker's move puts it in prime fighting position against Oracle and IBM, which have been slow to adjust their pricing models for new chips from AMD, Intel and others." According to Ashlee Vance in July 2007, "Most software vendors have had the decency to settle on a per-socket basis for their pricing schemes, ignoring the number of cores per chip. Meanwhile, IBM and Oracle, the vendors with the most to lose, prefer to keep you in a state of pricing confusion."

Anecdotally, some companies are finding that database licenses have become their single biggest IT cost. The impact is probably greater on small and medium-sized companies that don't have the same ability to command the hefty discounts that larger companies typically enjoy from database vendors. A colleague related a story that illustrates the issue. His brother worked for a 200-person company that decided it was time to upgrade their database applications. They set out to deploy a well-known proprietary database until they discovered that the database license fee was going to exceed their entire current annual IT budget! They ended up deploying an open source database instead.

3. The Looming Open Source Database Tsunami

In August 2007, Tom Daly revealed the results of a SPECjAppServer2004 benchmark based on an entirely open source software stack, with PostgreSQL running on a Sun Fire T2000 server and the Glassfish Application Server on two Sun Fire X4200 servers. The announcement was revealing for two reasons:
  • It showed PostgreSQL capable of supporting more than 6,000 concurrent users on a commodity hardware platform
  • The benchmark result was within 10% of a published result from an HP/Oracle configuration costing more than three times as much. The major reason for the huge price difference was the cost of Oracle (at $110,000 compared to PostgreSQL $0).

    For the record, it should be noted that the SPECjAppServer2004 benchmark does not include a pricing metric, so these are not official prices. Nonetheless, since benchmark configurations clearly cost actual money, it seems reasonable to assess the prices involved. In this case, all hardware and software prices were drawn from publicly-available sources.

Open source databases still do not scale as well as proprietary databases, but they now perform well enough to manage a broad range of challenging applications. End users who previously saw OSDBs as primarily suitable for simple low-volume applications are now able to reasonably consider them for departmental database server deployments.

Do OSDBs have the features needed for serious deployment, though? Twelve months ago, Forrester Research released a report suggesting that 80 percent of applications typically use only 30 percent of the features found in commercial databases, and that open source databases deliver those features today. While Forrester noted that OSDBs still lag for mission critical applications, those holes are likely to be plugged as bigger players announce 24x7 technical support and service (as has already happened, for example, with PostgreSQL).

A recent survey of Oracle users showed 20% having open source databases larger than 50 Gbytes and two thirds citing cost as the driver to adoption of open source. Open source database adoption is still relatively small. Does that mean OSDBs should be dismissed? Not according to a Gartner analyst, quoted last year as saying "We think it is a big deal. Granted, in the DBS market right now, they are very small players. Remember about 10 years ago, Linux in the market was a very small player? Not so much, anymore."

The comparison may be apt. With the combination of essential features, improved performance, robust support, and compelling price, OSDBs today bear a striking resemblance to Linux a few years ago. Many believe that the wave looming on the horizon is a tsunami. Time alone will tell.

4. The Perfect Storm for Proprietary Databases

Underneath major end-user applications like ERP and data warehouse software, every major hardware and software component is now subject to commodity pricing. As we have seen, each year hardware prices continue to decline while processing power increases. At the same time, leading operating systems like Sun's Solaris and Linux are now open source and can be deployed for free. The same is true of most other components of the software stack, including virtualization software, databases, application servers, web servers, and other middleware. Even the chip designs (RTL) for Sun's UltraSPARC T1 and T2 processors have been open sourced, allowing community members to build on proven hardware at a much lower cost.

Why, then, is proprietary database software becoming more expensive while everything else reduces in price? End users normally expect to benefit from the cost savings resulting from improvements in technology. I am writing this blog, for example, on an affordable computer that would easily outperform expensive commercial systems from just 10 years ago.

It seems difficult to resist the conclusion that proprietary database companies have managed to redirect a good chunk of these savings away from end users and into their own coffers. Successful as this strategy has been, though, it could ultimately backfire. The more expensive proprietary databases become, the more attractive lower cost alternatives appear.

A number of forces are currently at work in the market:

  • The momentum around open source software has continued to build, and many open source products, while not as capable as proprietary alternatives, have become "good enough" to replace them. This is also true of open source databases.
  • Large technology suppliers are beginning to bundle OSDBs, with the result that customers are able to take out support contracts with established companies as well as startups. Sun, for example, ships and supports PostgreSQL.
  • Benchmarks are beginning to feature OSDBs. Thus OSDBs are starting down a path that has been trod by proprietary databases over many years. Benchmarks can be expected to highlight the capabilities of OSDBs, accelerate the process of OSDB performance improvement, and, increasingly, expose the price difference between OSDBs and proprietary databases. And as the scalability of OSDBs increases, benchmarks will be published on larger systems, opening up an even wider gap in database pricing.
  • The sweet spot in the hardware market is a two- to four-chip server. With the advent of quad-core chips from Intel and AMD, and the 8-core UltraSPARC T1 and T2 chips released by Sun, such systems have become powerful enough to carry out processing that required much larger and more expensive systems in the past. The pricing chasm between low cost hardware and high cost proprietary databases continues to widen.

    For example, the UltraSPARC T1-based servers, the Sun Fire T1000 and T2000, shipped with 8 cores on a single chip. The recently-released single-chip, 8-core UltraSPARC T2 servers, which deliver twice the performance at roughly the same cost, now attract a DB2 license charge that has increased 66% compared to the T1 platforms. So IBM has taken the opportunity to significantly hike the price of the database software on that platform, even though the number of chips and number of cores has not changed.

    At the same time, 8-core UltraSPARC T1 servers attracted a 2-core license charge from Oracle (a very reasonable pricing decision on Oracle's part - see Oracle's published licensing multiplier table). At the time of writing, Oracle has not announced a final decision on pricing for the T2 platform, but it is unlikely that Oracle will be able to resist the temptation to emulate or even outdo the cynicism of IBM. Why such a pessimistic expectation? Because the same multiplier table announces that the 1.4GHz UltraSPARC version of the older T1 servers will be subject to a 0.5 multiplier instead of the 0.25 multiplier that applies to the 1.0 GHz and 1.2 GHz versions of the same platform. So a simple 17% clock speed increase in the hardware, with no other changes at all, prompted Oracle to double the license charge! (The arithmetic is spelled out just after this list.)

    To be fair to Oracle, since the UltraSPARC T1 and T2 platforms are single chip systems, customers can purchase Standard Edition and Standard Edition One licenses for them at greatly reduced prices. But these editions do not offer the full Oracle feature set, and in particular they do not offer the parallel processing capabilities essential for efficient processing on the 32-way T1 and 64-way T2 platforms; if customers want parallel capabilities they must purchase the vastly more expensive Enterprise Edition with its core-based licensing and inexplicable and inconsistent "discounts".

  • Web 2.0 is gathering momentum, and the new breed of companies leading the charge are largely ignoring proprietary databases for their deployments. Probably much of the scepticism relates to the cost implications.
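
To spell out the arithmetic behind the multiplier change described above (using only the figures quoted in that paragraph, for an 8-core UltraSPARC T1 server):

    1.0/1.2 GHz:  8 cores x 0.25 = 2 Oracle core licenses
    1.4 GHz:      8 cores x 0.5  = 4 Oracle core licenses

The same 8-core chip, clocked roughly 17% faster (1.4 / 1.2 ≈ 1.17), attracts twice the license charge.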

In the past there has been no real alternative to proprietary databases. That has changed, at least for new applications that have no legacy database dependencies. The relative proportion of hardware and software purchase prices has been changing, too, and for some years software has gradually been consuming ever-larger slices of the pie. In recent times the pace has accelerated, though, as commodity-priced hardware has become much more powerful and database prices have increased.

For the most part, customers still have not entirely woken up to these trends. But if they ever do, we may see a significant shift in the database market. Is it possible that the perfect storm for proprietary databases is brewing?

5. Proprietary Counter Strategies

Proprietary Database companies are not without ways of responding to the challenge from OSDBs. Here are a few possibilities.
  • Strategy 1: Resistance is Futile - You Will Be Assimilated. Picking off your competitors can get a lot easier when they are open source companies, because most of them struggle to address a major discrepancy between their penetration and their annual revenue. Putting it another way, they have plenty of users but very little revenue to show for it. Hey, their product is free, after all! So far, not many people have figured out how to become absurdly rich by giving away software.

    Buying a competitor gives you access to their Intellectual Property (IP) and their customer base. Sometimes it simply eliminates a competitor from the marketplace. Either way, playing the Borg can be an effective way of reshaping the market in your favor.

    Note that Oracle has already made some raids across the border, having acquired Innobase, maker of InnoDB, MySQL's most popular transactional engine, and Sleepycat Software, maker of Berkeley DB, another transactional engine used with MySQL. In response, MySQL has scrambled to introduce Falcon, a transactional database engine of its own.

    Any of the major proprietary database companies could reasonably play the role of the Borg in this scenario, though, since all of them have very deep pockets. MySQL is probably the most vulnerable to takeover, since it's privately held. PostgreSQL may be more difficult to silence, since it is developed by an active community rather than a single company. But in either case, even if you pick off the company or the key community contributors, you haven't removed the IP from the market because the database is open source.

  • Strategy 2: Bait And Switch. Offer a cut-down version of your own proprietary database for free, primarily targeting developers and companies doing pilot implementations. The idea is to make it easy for people to develop on your platform, then charge like wounded bulls when they deploy in earnest.

    All of the major proprietary databases have free cut-down versions. Oracle Lite supports databases limited to 4 Gbytes of disk and 64 concurrent connections. Microsoft's SQL Server Express Edition supports one CPU only, 1 Gbyte of memory, and 4 Gbytes of disk. IBM's DB2 Express-C supports 2 processor cores, 2 Gbytes of memory, and unlimited disk.

    These database editions are free, but they are not open source. The pricing policy could change overnight. And, as outlined above, each has restrictions that limit its usefulness for deployment.

  • Strategy 3: Revenue Pull-Through. Include the database as a bundle with other pieces of your software stack. Focus the customer's attention on buying something else, and chances are they won't notice or won't care that they've bought your database as well.

  • Strategy 4: Business As Usual. If you wait long enough, perhaps open source databases will stumble or be acquired by someone. Maybe their fall will be as meteoric as their rise. Or maybe the Borg will show up and assimilate them before they build too much more momentum. Either way, it will be one less competitor to worry about.

    If you think a wait-and-see strategy sounds implausible, history shows that when they can't make up their minds how to respond, a lot of companies (and countries for that matter) do little more than sit on their hands. Australia's first Winter Olympics gold medalist, speed skater Steven Bradbury, demonstrated how to win this way in spectacular fashion at the 2002 Winter Olympics. (Actually the last comparison is not entirely fair to Steven Bradbury - although he did win because the other contestants all stumbled, Bradbury's presence in the final was an achievement in itself that clearly demonstrated his ability and commitment.)

  • Strategy 5: Reduce Prices. Much of the imperative for a migration to OSDBs will be removed if proprietary database companies drop their prices significantly. The excellence of proprietary databases is certainly not under question - I can personally attest to the performance, scalability, rich feature set, and robustness of both Oracle and DB2, for example. The search for alternative databases is largely driven by the need for pricing relief. OPEC discovered in the 1970s that inflated oil prices led to both energy conservation and a search for alternative energy sources. When OPEC soon reduced prices again, much of the impetus behind alternative energy disappeared in the West (sadly).

    This strategy is only feasible if proprietary database companies derive most of their revenue from other sources. Oracle is probably the most vulnerable here.

My vote for the Strategy Most Likely To Succeed is a tie between Revenue Pull-Through and Reduce Prices. Oracle is arguably becoming the most successful proponent of the pull-through strategy. Oracle wants to supply you with a full software stack, including an OS, virtualization software, a broad range of middleware, a database, and end user applications. The largest component of Oracle's revenue currently still comes from database licenses, but the company is working hard to reduce that dependency. Until that happens, reducing prices across the board will be challenging for Oracle. If Oracle succeeds with a pull-through strategy, it doesn't mean that OSDBs will fail, of course. It simply means that Oracle is less likely to sustain major damage from their success.

Price reductions, if they are large enough and sustained enough, are likely to do more to slow down OSDB penetration. But I suspect that proprietary companies, if they are to do it at all, will need to reduce prices soon; if enough momentum builds around OSDBs we will reach a tipping point where it won't matter any more (witness the rise of Linux).

6. Conclusion

Are proprietary databases doomed, then? Not at all. Even if proprietary database companies pull no surprises, they won't fade away anytime soon. Too much legacy application software currently depends on them. Until ISV applications - like SAP's R/3, for example - support MySQL and PostgreSQL, end users will be wedded to proprietary databases. (Note, though, that SAP does support its own free and open source MaxDB database with R/3). As Oracle builds its software portfolio, too, more applications will ship with the Oracle database bundled. And for the foreseeable future, proprietary databases will be the platform of choice for the largest mission-critical database deployments.

Make no mistake, though, open source databases are coming. For established companies it's more likely to be an evolution than a revolution. We will probably see a gradual OSDB surround, where new applications and deployments are increasingly based on OSDBs, driven by the cost savings. In emerging markets, though, it's looking more like a revolution. Last year I met with a small number of high-adrenaline companies in India, a market undergoing very rapid growth. They were openly dismissive of proprietary databases. One company had a small installation that was described as a "legacy" application due for replacement by an open source database. This is the scenario playing out today at high fliers like Google, Facebook, YouTube, and Flickr.

How can you take advantage of the rise of OSDBs? Here are some suggestions:

  • If you're considering a new database deployment, examine the possible cost savings of an OSDB.
  • If you're an established proprietary database user, don't simply throw out your database. Take the time to establish the feasibility and quantify the benefits of an OSDB solution before making a change.
  • If you're unhappy about the prices you're paying for database software, let your supplier know - the more senior the contact, the better. Suppliers do listen to their customers! As a side note, if you're a proprietary database customer looking at OSDBs, you don't need to make a big secret of it. I know of situations where proprietary database suppliers offered deep discounts to keep a customer away from OSDBs!

Perhaps the last word should go to The Economist. The following observations, reported in January 2002, may well prove prescient: "if software firms continue to think they can cash in on every new increase in computer performance, they will only encourage more and more customers to defect. And today, unlike a decade ago, open-source software has become just too good to be ignored."

Allan

What do you think? Feel free to let me know at Allan.Packer@Sun.COM.

About

I'm a Principal Engineer in the Performance Technologies group at Sun. My current role is team lead for the MySQL Performance & Scalability Project.
