Engineered and General Purpose Systems
By jsavit on Oct 19, 2011
One thing I learned on joining Oracle is that the company likes to make a big splash at Oracle OpenWorld (though we did announce big items like the new T4 platform beforehand), and this year's event fit the pattern. (Oh yeah, before I get distracted: Solaris 11 is coming! Be there or be square!) This OOW highlighted the increased shift towards "engineered systems", a dramatic change in how systems will be designed and delivered. I've been working in this area for some time now, in particular with Exalogic, and want to share my impressions.
Current state
Today, many servers are high-touch systems designed and configured by the customer. This places the burden of integration on the customer, who has to invest substantial staff expertise in non-revenue-generating efforts and take on the risk of misconfiguration. Because it's so expensive in risk, staff time and expertise, there's a strong incentive to hand-craft just a few "standard model" configurations at a time. These models must be used until the next refresh and are often not optimal for any of the workloads they are expected to run.
Plus, since so many configuration and part-selection details (this NIC, that amount of RAM using these DIMMs, those switches, at these OS and app software levels) exist only at that customer's site, customers risk discovering corner cases because they are the only people in the world running that exact combination.
In contrast, engineered systems are designed to be optimal for a particular workload class, validated and proven by the vendor (that's us at Oracle, if you're still following) to be reliable, simple to purchase, configure and manage, and have dramatically superior performance for their target purpose. At the same time, these systems are built on industry-standard components rather than rare or exotic chips, in order to take advantage of price/performance advances.
This started with databases, unsurprising for Oracle, with Exadata the optimal platform for running Oracle RAC, and subsequently Exalogic for Java middleware and other applications. The idea has scaled to other workload types: Exadata has proven to be the premier platform for both OLTP and DSS, and Exalogic provides dramatic performance improvements for applications, not exclusively in the Java app-server space it was first aimed at, but also for PeopleSoft, Siebel, JD Edwards, E-Business Suite and Tuxedo.
New members of the family show that the architectural concept scales further: Exalytics for on-line data analytics, and SPARC SuperCluster. Oracle's engineered systems run on both Solaris and Oracle Linux, on both x86 and SPARC. This is a concept that has legs.
The most visible selling point for these systems is performance. Unlike general-purpose platforms, engineered systems are, well, engineered for a purpose. Rather than designed to be adequate for everything, they are built to provide outstanding performance for a selected category of work.
For example, Exadata is designed for databases, so it has a tremendous amount of disk I/O capacity, using SSD devices for optimal latency and IOPS, backed by rotating media for capacity. I/O is done by storage nodes to offload I/O work from compute nodes, connected via a 40Gb Infiniband network for lowest latency and highest bandwidth. Unique optimizations yield further performance gains: storage cells take on part of the burden of selecting rows ("Exadata Smart Scan") rather than blindly transmitting all data to compute nodes just so unneeded rows can be discarded at the destination. Another Exadata optimization, hybrid columnar compression, uses column value compression to reduce disk space requirements. Consider a database with a LASTNAME column: you might have a lot of "Johnson" values in column order. Compressing common values saves disk space and reduces disk I/O time.
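To make the LASTNAME intuition concrete, here's a toy sketch of why column-ordered storage compresses so well. This is just run-length encoding of repeated values for illustration - it is not Oracle's actual hybrid columnar compression algorithm, which is considerably more sophisticated.

```python
# Illustrative sketch only: a toy run-length encoding of repeated
# column values. NOT Oracle's hybrid columnar compression algorithm.
from itertools import groupby

def rle_encode(values):
    """Collapse runs of identical values into (value, count) pairs."""
    return [(v, len(list(run))) for v, run in groupby(values)]

# Stored in column order, equal LASTNAME values sit next to each other,
# so long runs collapse into a handful of pairs.
lastnames = ["Johnson"] * 4 + ["Lee"] * 2 + ["Smith"] * 3
encoded = rle_encode(lastnames)
print(encoded)  # [('Johnson', 4), ('Lee', 2), ('Smith', 3)]
```

The same nine values stored in row order would interleave with other columns and compress far worse - which is the whole point of columnar layouts: less disk space, and fewer blocks to read per query.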
In contrast, Exalogic is designed for the application "middle tier" (between presentation and data persistence), and therefore has different requirements. For example, Java performance is very much affected by RAM speed and quantity, so compute node processors are configured with the maximum RAM that can be deployed - consistent with the memory needs of a JVM - without sacrificing RAM latency. Performance of modern applications is typically constrained by network latency (consider how Java application servers transmit changed state between nodes), so Exalogic is configured with the same Infiniband network as Exadata and has optimized database access. Further - and this is an advantage of Oracle owning both the software and hardware stack - WebLogic and other application products have specific optimizations for Exalogic that reduce kernel pathlength for network access.
These are just a few examples of the "special sauce" that let different parts of the Oracle hardware and software stack combine for better performance and management. This is a blog entry, not a book (not yet, at least) so I have to restrain myself a little.
Arguably the biggest benefit is a less exotic one: these systems are built for balanced performance. So many times I've seen systems (on many platform types) with unbalanced configurations: they might have excess CPU but be hopelessly I/O bound - the CPUs spent all their time waiting. Or they had plenty of I/O and CPU, but not enough RAM. Understanding workload characteristics well enough to build systems that scale as work grows - it's not so easy. With engineered systems we've been able to create systems that don't run into bottlenecks due to unbalanced capacity.
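The "unbalanced configuration" argument boils down to simple arithmetic: a system's throughput is capped by whichever resource runs out first, so excess capacity elsewhere is wasted. A minimal sketch, with entirely made-up demand and capacity numbers:

```python
# Toy bottleneck model: throughput is capped by whichever resource
# runs out first. All demand/capacity figures below are illustrative
# assumptions, not measurements of any real system.

def max_tps(per_txn_cost, capacity):
    """Transactions/sec one resource can sustain on its own."""
    return capacity / per_txn_cost

# (per-transaction demand, total capacity per second) for each resource
resources = {
    "cpu_ms": (5.0, 8 * 1000.0),     # 8 cores ~ 8000 ms of CPU/sec
    "io_ops": (20.0, 10_000.0),      # 20 IOPS per txn vs 10k IOPS total
    "net_kb": (50.0, 1_000_000.0),   # 50 KB per txn vs ~1 GB/s
}

tps = {name: max_tps(cost, cap) for name, (cost, cap) in resources.items()}
bottleneck = min(tps, key=tps.get)
print(tps)
print(f"System tops out at {tps[bottleneck]:.0f} TPS, limited by {bottleneck}")
```

In this made-up example the CPUs could drive 1600 TPS, but the disks cap the system at 500 - the classic "excess CPU, hopelessly I/O bound" configuration. Balancing means sizing every resource against the same target workload.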
The published results show performance that is in many cases several times better than comparable kit (similar chip and clock speeds - we're not gaming the system with 2011 gear compared to antiques). This works.
Faster networking eliminates limits of horizontal scale
The biggest constraint on performance in many networked applications is (duh...) network latency. Exa products essentially solve this problem by using Infiniband connections for low-latency, high-bandwidth interconnects. The Infiniband fabric provides the kind of bandwidth and latency you would previously see only on the backplane within a server. Exadata and Exalogic systems can be configured with up to 8 full racks of servers, each with many compute nodes, on a single Infiniband network. Software optimizations bypass the kernel TCP/IP stack to put data directly on the wire and prevent the CPU from becoming the bottleneck.
This removes the primary traditional constraint on horizontal scale - delay caused by the "chattiness" between computers hosting a networked application. When applications on a network can talk to one another with latencies that approximate RAM DMA times (indeed, access can be categorized as "remote DMA") then you can for the first time link together many systems with linear scale.
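A back-of-envelope calculation shows why per-round-trip latency, not bandwidth, bounds a "chatty" application. The latency and round-trip figures below are illustrative assumptions for the sketch, not measured Exa numbers:

```python
# Back-of-envelope sketch: how round-trip latency bounds a "chatty"
# distributed transaction. Latency figures are illustrative assumptions,
# not measurements of any real interconnect.

def transaction_time_us(round_trips, latency_us, compute_us=100.0):
    """Total time for one transaction: fixed compute plus network waits."""
    return compute_us + round_trips * latency_us

CHATTY_ROUND_TRIPS = 50  # e.g. state sync chatter between app-server nodes

ethernet = transaction_time_us(CHATTY_ROUND_TRIPS, latency_us=100.0)  # TCP/IP-class
rdma = transaction_time_us(CHATTY_ROUND_TRIPS, latency_us=1.5)        # RDMA-class

print(f"Ethernet-class: {ethernet:.0f} us/transaction")
print(f"RDMA-class:     {rdma:.0f} us/transaction")
print(f"Speedup:        {ethernet / rdma:.1f}x")
```

With these assumed numbers the TCP/IP-class transaction spends 98% of its time waiting on the network, while the RDMA-class one is dominated by actual compute - which is why cutting per-message latency, rather than adding bandwidth, is what restores near-linear horizontal scale.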
Performance is an infinite source of computing fun, but it's not always the most important issue. Real-world pain points are often about complexity and management, rather than speeds and feeds. The first part is getting rid of the 6-month science project that starts when a pile of components shows up on the loading dock, replacing it with a system that can literally be up and running in a day. The entire platform is integrated and tested at the factory. Components and assembly at the customer site are the same as at the support center and in product engineering. This cuts part- and configuration-based problems, and ensures that problems discovered on-site can be reproduced at the factory.
On an ongoing basis, the benefit is a system where you can manage and monitor everything from apps down to storage from a single browser window - with multiple nodes seamlessly managed as one system at different levels of abstraction. That's provided by Oracle Enterprise Manager, which lets you manage networked systems as a coordinated whole - "the network is the computer". Catchy, huh? But this time, all the way up to the application level where business value resides. This is also the foundation for a complete cloud lifecycle, with virtual system slicing, self-service, assembly deployment, automatic scale-up and scale-down, metering and chargeback. Heady capabilities.
Some people have referred to Exa* products as a "new version of the mainframe". I get the "it's been tested and purchased together" aspect, but that's been possible in open systems where the option to buy preconfigured reference architecture implementations has always been available (if not always used). The scalable systems aspect also is understandable, but open systems platforms have outscaled mainframes in most aspects for many years. But, okay - engineered systems have properties that can be compared to mainframes.
The analogy falls apart elsewhere: mainframes are general-purpose systems that quite easily can have unbalanced performance (this is not intended to be partisan - I'm not attacking the platform, just pointing out that it can be configured as unbalanced as any other; much of my career was spent on mainframe systems fighting problems due to unbalanced performance). The other difference is that Oracle engineered systems are built from standard platform components: they run Oracle Linux or Solaris on x86 or (in SPARC SuperCluster) SPARC processors. They run standard application APIs and components, like Java application servers based on WebLogic Server. So, there's no lock-in to proprietary hardware or APIs or operating systems that (for whatever merits they might have) look like no other systems and have high barriers to exit.
Do General Purpose Systems Disappear?
I don't think GP or "non-engineered" systems go away. Systems are often purchased to support a variety of workloads which may not be fully known in advance, and not everybody will buy into the concept of engineered systems. There will also need to be component systems to build from - so "best of breed" systems will be around for a long time. Still, it's going to be an easier choice to run engineered systems proven to work reliably and at scale for known and important workloads.