Oracle Posts SPEC SFS Benchmark and Crushes NetApp Comparables

Oracle has fired another shot across NetApp's bow. In October 2011, Oracle posted impressive SPC-1 benchmarks that were twice as fast as NetApp at half the cost. Now customers looking for proof of the ZFS Storage Appliance's (ZFSSA) superior performance and lower cost have another benchmark to compare.

Why are we posting now?
For a long time the old Sun engineering regime refused to post spec.org SFS results, citing problems with the benchmark that are real. However, some customers refused to even look at the Oracle ZFS Storage Appliance because of the lack of benchmark postings, and competitors like NetApp and EMC would use that absence as supposed proof that we must perform poorly.

But NetApp and EMC have other, much larger configs that are much faster?
I should point out that NetApp and EMC both have much larger SPEC SFS submissions, but those are configurations that almost no customer would run, and fewer still would be willing to pay for. Most customers who purchase NAS to run NFS buy many smaller two-node HA clusters rather than a $20 million, 24-node NAS cluster. I tried to include EMC in this comparison but soon realized it was pointless: their closest submission used a Celerra gateway in front of a four-engine VMAX, whose list price would be off the charts, so I did not consider it useful here. My goal was to get a good view of comparable systems that customers might actually consider for a performance-oriented NFS NAS box.

Price Matters!
One of the major downsides of SPEC SFS is that, unlike SPC, it does not require vendors to post prices, so customers cannot easily compare competitors on cost. Obviously every customer wants great performance, but price is always a major factor as well, so I have included list prices as best I could determine them. For the NetApp prices I used a price sheet I easily found on Google. When comparing performance-oriented storage, customers should be comparing $/ops rather than $/GB.

Let's look at the results at a high level

Storage System | SPEC SFS Result, ops/sec (higher is better) | Peak Response Time, ms (lower is better) | Overall Response Time, ms (lower is better) | # of Disks | Exported TB | Estimated List Price | $/OPS
Oracle 7320    | 134,140 | 2.5 | 1.51 | 136 | 36.96  | $184,840   | $1.38
NetApp 3270    | 101,183 | 4.3 | 1.66 | 360 | 110.08 | $1,089,785 | $10.77
NetApp 3160    | 60,507  | 3.5 | 1.58 | 56  | 10.34  | $258,043   | $4.26
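
The $/OPS column is simply the estimated list price divided by the SPEC SFS ops/sec result. As a quick sanity check, here is a small Python sketch (illustrative only, using the figures from the table above) that reproduces those numbers:

```python
# Recompute the $/OPS column from the table above:
# $/OPS = estimated list price / SPEC SFS ops/sec.
systems = {
    "Oracle 7320": (184_840, 134_140),
    "NetApp 3270": (1_089_785, 101_183),
    "NetApp 3160": (258_043, 60_507),
}

for name, (list_price_usd, ops_per_sec) in systems.items():
    print(f"{name}: ${list_price_usd / ops_per_sec:.2f} per op/sec")

# Oracle 7320: $1.38 per op/sec
# NetApp 3270: $10.77 per op/sec
# NetApp 3160: $4.26 per op/sec
```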

Umm, why is the ZFSSA so much more efficient?
In a nutshell, it's superior engineering and the use of technologies such as the Hybrid Storage Pool (HSP) in the ZFS Storage Appliance. The HSP extends flash not only to the read cache but also to the write cache.
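
As a very rough illustration of the HSP idea, here is a toy Python sketch (conceptual only; the class and tier names are invented for illustration and are not the actual ZFS ARC/L2ARC/ZIL implementation): reads are served from DRAM first, then from the flash read cache, and only then from spinning disk, while synchronous writes are acknowledged as soon as they hit the flash log and are flushed to disk in the background.

```python
# Toy model of a hybrid storage pool read/write path -- conceptual only,
# not the real ZFS implementation.

class HybridPool:
    def __init__(self, disk_blocks):
        self.dram_cache = {}           # small, fastest tier (DRAM read cache)
        self.flash_read_cache = {}     # larger flash read cache
        self.flash_write_log = []      # flash log for synchronous writes
        self.disk = dict(disk_blocks)  # spinning disks backing the pool

    def read(self, block):
        """Serve a read from the cheapest tier that holds the block."""
        if block in self.dram_cache:
            return self.dram_cache[block], "dram"
        if block in self.flash_read_cache:
            data = self.flash_read_cache[block]
            self.dram_cache[block] = data          # promote on hit
            return data, "flash"
        data = self.disk[block]                    # slow path: spinning disk
        self.dram_cache[block] = data
        self.flash_read_cache[block] = data        # warm the flash cache
        return data, "disk"

    def sync_write(self, block, data):
        """Acknowledge a synchronous write once it is logged to flash."""
        self.flash_write_log.append((block, data))
        self.dram_cache[block] = data

    def flush(self):
        """Background task: apply logged writes to the spinning disks."""
        for block, data in self.flash_write_log:
            self.disk[block] = data
        self.flash_write_log.clear()


pool = HybridPool({"blk0": "hello"})
print(pool.read("blk0"))   # ('hello', 'disk') on the first read...
print(pool.read("blk0"))   # ...then ('hello', 'dram') once cached
```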

The 3160 result includes NetApp PAM read-flash cards. I am not sure why, a year later, they didn't include them in the 3270 test if they improve performance so much. Maybe they will post another NetApp result with them now?

What now?
Oracle ZFSSA engineering has now posted results that again blow away NetApp and prove our engineering is outstanding. It makes sense that we would have an edge when you consider that the NFS protocol itself was invented at Sun. NetApp has yet to respond with a comparable new SPC-1 benchmark. I know some NetApp bloggers expected we would never post SPEC SFS results and therefore claimed our performance must be poor. Now we have posted impressive results, and there will be more to come; stay tuned. As one famous blogger has said, the proof is in the pudding.

Comments:

Hi, D from NetApp here.

Congratulations on submitting SPEC SFS results.

I noticed ZFS was set to RAID0 - why not RAIDZ2? (your chart shows mirroring but the SPEC full disclosure shows RAID0 - something needs editing).

You seem to have listed, of the many NetApp results, one of the smallest :)

The 6240 did 190,000 SPEC SFS IOPS at 1.17ms ORT. Way more IOPS than the recent Oracle result and far better latency… you probably want to include it in your chart. And that’s not even the fastest NetApp system (that title is reserved for the 6280, which can take up to 16TB of cache).

http://wp.me/p1k3XN-2U

The cluster-mode NetApp systems went to 1.5 million.

http://wp.me/p1k3XN-52

But good to see Oracle is now participating.

D

Posted by Dimitris Krekoukias on February 23, 2012 at 04:08 PM MST #

Hi Dimitris,
Thanks for commenting. I purposely left out your 6240 and cluster submissions (see my third paragraph) because there is no point in comparing them to a 7320. For example, if I were proposing a 7320 to a particular customer, you would never position your 6000 series as a competitor.

I am posting what I feel is relevant to most customers based on our 7320 submission. Just how many customers have bought the 1.5 million OPS config that makes that submission so important and relevant? I would bet close to zero.

Now, with that said, Oracle engineering may at some future date post a larger 7420-type config more in line with the 6000 series. Then we can compare larger units like your 6240 submission.

-dz

PS: Also where is your updated SPC-1 benchmark??? When are you going to update that? :)

Posted by dariuszang on February 23, 2012 at 05:09 PM MST #

Actually, the NetApp 6080 results are on the SPEC website, right next to our 7320. Check out this link...
http://www.spec.org/sfs2008/results/sfs2008nfs.html

Note how the NetApp 6080, with 324 disk drives, is still slower than the Oracle 7320 with only 136 disk drives.

Posted by guest on February 24, 2012 at 08:47 AM MST #

The diagram shows 1.6TB of ZIL and 5TB of L2ARC.

The 7320 report shows
73GB x 8 = 584GB of raw ZIL and
512GB x 8 = 4096GB of L2ARC.

What am I missing?

-johnj

Posted by john McLaughlin on February 24, 2012 at 04:54 PM MST #

Hi John,
That diagram is just showing the HSP technology at a high level, not an exact Visio layout of this benchmark setup. :)

Thanks,
Darius

Posted by Darius Zanganeh on February 24, 2012 at 05:00 PM MST #

Care to comment on the previous position held by Oracle that SPEC SFS is a nonsense benchmark? https://blogs.oracle.com/bmc/entry/eulogy_for_a_benchmark

Posted by Adam Leventhal on March 01, 2012 at 08:20 AM MST #

Adam,
Thanks for taking the time from your busy schedule to post a comment here. My second paragraph alludes to exactly that post. I didn't call it out because I didn't want potential customers to get lost in the arguments between the NetApp bloggers and Bryan; that is why I pointed out that most of the enormous EMC and NetApp configs are unrealistic, and that price matters. But customers and other vendors would hold the lack of a submission against us and keep us from even getting a seat at the table. This simple 7320 benchmark was just a way to prove out the architecture and advanced software in the ZFSSA that you helped create! Now NetApp and EMC are losing excuses daily for why their long-term customers shouldn't consider, or even switch to, the ZFSSA for their primary storage needs. There have always been many reasons to switch (DTrace visual analytics, raw performance, the Hybrid Storage Pool, cost savings, etc.); posting this benchmark, and others to come, simply helps remove any FUD a potential customer may have gotten from a competitor. It also makes it harder for competitors to use the benchmark in their favor, as we are clearly more efficient per disk.

Posted by Darius Zanganeh on March 01, 2012 at 08:56 AM MST #
