“Storage I/O from DRAM is 1000x Faster Than Flash”. Well, sort of…

We on the Oracle ZFS Storage team had a lively debate recently about how to best capture the scope of how much better it is to serve I/O from DRAM than it is to serve it from flash. This debate came to a head over the following comment:

DRAM latency is 1 nanosecond, flash is 1 microsecond, and disk is 1 millisecond…

All things being equal (and mathematically speaking, of course), if it takes 1 full second to complete a workload in DRAM, it would take 16.67 minutes to complete from flash and roughly 11.6 days from disk. DRAM is 1,000 times faster than flash. DRAM is a million times faster than disk (1,000 × 1,000).

If you don’t have a boat load of DRAM, go home.
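You can check that arithmetic with a quick sketch in Python. The 1 ns / 1 µs / 1 ms figures are the round orders of magnitude from the comment above, not measured latencies for any particular device:

    # Illustrative latency figures from the comment above -- round
    # orders of magnitude, not measured values for any particular device.
    DRAM_LATENCY_S  = 1e-9   # 1 nanosecond
    FLASH_LATENCY_S = 1e-6   # 1 microsecond
    DISK_LATENCY_S  = 1e-3   # 1 millisecond

    # If a latency-bound workload finishes in 1 second when every access
    # hits DRAM, scale that time by each medium's latency ratio to DRAM.
    workload_dram_s = 1.0
    for name, latency in [("flash", FLASH_LATENCY_S), ("disk", DISK_LATENCY_S)]:
        scaled = workload_dram_s * (latency / DRAM_LATENCY_S)
        print(f"{name}: {scaled:,.0f} s = {scaled / 60:,.1f} min = {scaled / 86400:,.2f} days")

    # Output:
    # flash: 1,000 s = 16.7 min = 0.01 days
    # disk: 1,000,000 s = 16,666.7 min = 11.57 days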

Personally, I love what this example points out. The heartburn comes from the suggestion that an array serving I/O out of DRAM is 1000x faster than one serving I/O out of flash. Published benchmarks show the advantage isn't that extreme in real workloads, but the observation about the sheer scope of the latency difference still has merit. It's critically important for data center architects to understand that while flash and DRAM are both solid-state media, there is simply no comparison between the two in terms of performance and latency. Pointing out the difference in the raw latency capabilities of the media makes you go, "Hmmm. I didn't know that."

So, the question becomes: “Why isn't an array that serves most of its I/O from DRAM orders of magnitude faster than one that serves most of its I/O from flash?” The answer is, of course, software.

There are two things that limit system performance: hardware bottlenecks and software inefficiencies. That's not a knock on software, per se. It's just that, at this point in history at least, software stacks like those used in storage are far too complex to streamline all the way down to the speed of the hardware. That said, improving software efficiency will invariably get better results from the same hardware. And the complement is also true: faster hardware makes software, even inefficient software, run faster. To wit, if you install MS Word on a PC with twice the RAM and twice the processor speed, it will run a lot faster. We've all seen it. In fact, I daresay that a lot of the historical improvements in storage performance have been "Moore's Law" sorts of improvements. Any propeller head knows that adding RAM or faster processors to a system is the easiest way to improve overall system performance. Faster hardware matters.

We're not the only ones who are talking about the interaction of software and hardware, by the way. During its VNX2 launch last year, EMC touted rewriting its FLARE O/S (to the tune of two million lines of new code), to better exploit multi-core processors, and to do a better job caching the right content to flash. They said (and I paraphrase) that now that they're on the path to something that looks more like SMP (they had been dedicating particular tasks to particular cores previously), and using flash better, they're just getting warmed up. They have headroom on the hardware to do even better.

So why can a similarly priced Oracle ZFS Storage Appliance support 3x-5x more VMs than a VNX 5400 (based on the numbers EMC cited at the "VNX2" launch, which assume 50 IOPS per VM)? Two reasons:

1. Our media is much, much faster than the flash disks EMC relies on for performance.

2. We've had systems delivering 70-90% of I/O from DRAM since 2008, running on an SMP OS. EMC just found Dynamic Multicore Optimization last year; our OS kernel (Solaris) has been SMP since 1991. Our software is simply more efficient than theirs.
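That second point is also why a DRAM-centric array isn't a literal 1000x faster than an all-flash one: the misses matter. Here's a minimal sketch of the effective-latency math for a hybrid DRAM/flash design, using the same illustrative round-number latencies as above; the hit rates are assumptions for the example (the 70-90% range is our fleet observation, the 99% case is hypothetical):

    # Effective media latency for a hybrid array: hits served from DRAM,
    # misses fall through to flash. Same illustrative round-number
    # latencies as above; hit rates are assumptions, not measurements.
    DRAM_LATENCY_S  = 1e-9
    FLASH_LATENCY_S = 1e-6

    for hit_rate in (0.70, 0.90, 0.99):
        effective = hit_rate * DRAM_LATENCY_S + (1 - hit_rate) * FLASH_LATENCY_S
        speedup = FLASH_LATENCY_S / effective   # vs. serving everything from flash
        print(f"hit rate {hit_rate:.0%}: effective latency {effective * 1e9:,.0f} ns"
              f" -> {speedup:,.1f}x faster than all-flash")

    # Output:
    # hit rate 70%: effective latency 301 ns -> 3.3x faster than all-flash
    # hit rate 90%: effective latency 101 ns -> 9.9x faster than all-flash
    # hit rate 99%: effective latency 11 ns -> 91.0x faster than all-flash

The misses dominate the average: even at a 90% DRAM hit rate, the media-level win over an all-flash design is roughly 10x, not 1000x, and software path length (the point above) narrows the end-to-end gap further. That's the honest version of the headline claim, and it's still a big number.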

Also, when you apply EMC's "headroom" logic, our headroom is a lot higher than theirs because our media is a lot faster. It's important to think about this when choosing your hardware. The long-term potential of DRAM-centric storage is far greater than any of the various flash-based offerings being marketed today. And, there may even be a new storage category defined by Oracle ZFS Storage Appliances – “memory storage” as Wikibon calls it, distinct from all-flash arrays and conventional disk-centric arrays.

It’s flattering to see EMC adopting the architecture we pioneered in 2008, but the scope of difference in media speeds doesn't lie. Unless EMC starts using a large DRAM cache, automatically populated in real time, it’s hard to believe they’ll be able to catch up on price/performance with Oracle ZFS Storage.
