It happened so slowly, most people didn't notice until it was over.
I'm speaking, of course, of the rise of general-purpose computing during the 1990s. It was not so long ago that you could choose from a truly bewildering variety of machines. Symbolics, for example, made hardware specifically designed to run Lisp programs. We debated SIMD vs. MIMD, dataflow vs. control flow, VLIW, and so on. Meanwhile, those boring little PCs just kept getting faster. And more capable. And cheaper. By the end of the decade, even the largest supercomputers were just clusters of PCs. A simple, general-purpose computing device crushed all manner of clever, sophisticated, highly specialized systems.
And the thing is, it had nothing to do with technology. It was all about volume economics. It was inevitable.
With that in mind, I bring news that is very good for you, very good for Sun, and not so good for our competitors: the same thing that happened to compute in the 1990s is happening to storage, right now. Now, as then, the fundamental driver is volume economics, and we see it playing out at all levels of the stack: the hardware, the operating system, and the interconnect.
First, custom RAID hardware can't keep up with general-purpose CPUs. A single Opteron core can XOR data at about 6 GB/sec. There's just no reason to dedicate special silicon to this anymore. It's expensive, it wastes power, and it was always a compromise: array-based RAID can't provide the same end-to-end data integrity that host-based RAID can. No matter how good the array is, a flaky cable or FC port can still flip bits in transit. A host-based RAID solution like RAID-Z in ZFS can both detect and correct silent data corruption, no matter where it arises.
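To make the end-to-end idea concrete, here's a toy sketch -- nothing like the actual RAID-Z code, and every name in it is made up for illustration. The host keeps a checksum alongside each data block and computes parity as the XOR of the blocks. On read, a failed checksum reveals silent corruption no matter where it happened (disk, cable, HBA), and the block can be rebuilt from parity plus the surviving blocks:

```python
# Toy sketch of host-based parity with end-to-end checksums.
# Hypothetical names; real RAID-Z works on variable-width stripes
# and stores checksums in the block-pointer tree, not like this.
import hashlib
from functools import reduce

def xor_blocks(a, b):
    # XOR two equal-size blocks -- the same operation a RAID
    # controller does in silicon, done here on a general-purpose CPU.
    return bytes(x ^ y for x, y in zip(a, b))

def checksum(block):
    return hashlib.sha256(block).digest()

def make_stripe(blocks):
    """Return (parity, per-block checksums) for equal-size blocks."""
    parity = reduce(xor_blocks, blocks)
    return parity, [checksum(b) for b in blocks]

def read_block(blocks, parity, sums, i):
    """Verify block i; if a bit flipped anywhere in the path,
    reconstruct it from parity plus the other blocks."""
    if checksum(blocks[i]) == sums[i]:
        return blocks[i]
    others = [b for j, b in enumerate(blocks) if j != i]
    repaired = reduce(xor_blocks, others, parity)
    assert checksum(repaired) == sums[i]  # verify the repair, too
    return repaired
```

The point of the sketch is the division of labor: because the host computed the checksums, it can detect a corrupt block that an array controller -- which only ever sees the bits on its side of the wire -- would happily return as good data.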
Second, custom kernels can't keep up with volume operating systems. I try to avoid naming specific competitors in this blog -- it seems tacky -- but think about what's inside your favorite storage box. Is it open source? Does it have an open developer community? Does it scale? Can the vendor make it scale? Do they even get a vote?
That last question is becoming much more important due to trends in CPU design. The clock rate party of the 1990s, during which we went from 20MHz to 2GHz -- a factor of 100 -- is over. Seven years into the new millennium we're not even 2x faster in clock rate, and there's no sign of that changing soon. What we are getting, however, is more transistors. We're using them to put multiple cores on each chip and multiple threads on each core (so the chip can do something useful during load stalls) -- and this trend will only accelerate.
Which brings us back to the operating system inside your storage device. Does it have any prayer of making good use of a 16-core, 64-thread CPU?
Third, custom interconnects can't keep up with Ethernet. In the time that Fibre Channel went from 1Gb to 4Gb -- a factor of 4 -- Ethernet went from 10Mb to 10Gb -- a factor of 1000. That SAN is just slowing you down.
Today's world of array products running custom firmware on custom RAID controllers on a Fibre Channel SAN is in for massive disruption. It will be replaced by intelligent storage servers, built from commodity hardware, running an open operating system, speaking over the real network.
You've already seen the first instance of this: Thumper (the x4500) is a 4-CPU, 48-disk storage system with no hardware RAID controller. The storage is all managed by ZFS on Solaris, and exported directly to your real network over standard protocols like NFS and iSCSI.
And if you think Thumper was disruptive, well... stay tuned.