7420 SPEC SFS Torches EMC/Isilon, Netapp, HDS Comparables


Another SPECsfs submission, and another confirmation that the ZFSSA is a force to be reckoned with in the NFS world. The Oracle ZFSSA continues to astound with its performance benchmark numbers. Today Oracle posted the anticipated SPECsfs benchmark numbers for the 7420, and they simply leave you wondering: how? How is Oracle technology so much faster, more cost-effective and more efficient than the competition? I say efficient because Oracle continues to post impressive performance benchmarks, surpassing competitors' multi-million-dollar configurations at 2-5x lower price points.


For this comparison, I grabbed the top 2-node NetApp 6240 cluster, a recent Hitachi/BlueArc submission, as well as a 28-node SSD Isilon cluster (not really a realistic configuration). The NetApp is fairly close in terms of number of drives and number of controllers, which makes it a good comparison. However, the ZFSSA provides 40% more performance than the 6240 at a $700,000 lower price point!! This doesn't even factor in maintenance costs, extra software licensing, or the competition's lack of DTrace Analytics. I will demo DTrace Analytics in an upcoming post.

Another interesting point if you dig into the details: at NetApp's maximum of 190k IOPS, their latency is 3.6ms, whereas the ZFSSA has only 1.7ms latency at 202k IOPS. That means with the same 190k IOPS workload, Oracle would respond over 2x faster! I threw Isilon into the mix because they refuse to post SPC-2 results even though they are usually purchased for high-bandwidth applications; their configuration costs over $2 million more than the ZFSSA while still posting lower performance. Hitachi is included due to the market perception that they have a competitive NFS system for performance.

The links below on each system name will take you to the detailed summary on SPEC.org, including the hardware setup, RAID type, etc. The 7420 in this case was mirrored.

| Storage System | SPECsfs Result ops/sec (higher is better) | Peak Response Time ms (lower is better) | Overall Response Time ms (lower is better) | # of Disks | Exported TB | Estimated List Price | $/IOPS |
|---|---|---|---|---|---|---|---|
| Oracle 7420 | 267928 | 3.1 | 1.31 | 280 | 36.32 | $430,332 | $1.61 |
| NetApp 6240 - 4n | 260388 | 4.8 | 1.53 | 288 | 48 | $1,606,048 | $6.17 |
| Isilon S200 - 28n | 230782 | 7.8 | 3.20 | 672 | 172.3 | $2,453,708 | $10.63 |
| NetApp 6240 | 190675 | 3.6 | 1.17 | 288 | 85.8 | $1,178,868 | $6.18 |
| Hitachi 3090-G2 | 189994 | 9.5 | 2.08 | 368 | 36.95 | ? | ? |
| NetApp 3270 | 101183 | 4.3 | 1.66 | 360 | 110.08 | $1,089,785 | $10.77 |
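The $/IOPS column is simply the estimated list price divided by the SPECsfs ops/sec result (Hitachi is omitted since its price is unknown). A quick sketch of the arithmetic:

```python
# Recompute the $/IOPS column: estimated list price / SPECsfs ops/sec.
systems = {
    "Oracle 7420":       (267928, 430332),
    "Netapp 6240 - 4n":  (260388, 1606048),
    "Isilon S200 - 28n": (230782, 2453708),
    "Netapp 6240":       (190675, 1178868),
    "Netapp 3270":       (101183, 1089785),
}

for name, (ops, price) in systems.items():
    print(f"{name}: ${price / ops:.2f} per SPECsfs op/sec")
# Oracle 7420 comes out at $1.61, versus $6.17-$10.77 for the others.
```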

Fast Databases, Servers, Sailboats and NFS Storage Systems

So not only do we have some of the fastest sailboats in the world, we also have some of the fastest NFS storage systems. You could say we have a storage benchmark trifecta: we hold some of the top benchmarks in SPECsfs, SPC-1 and SPC-2. Take a ZFSSA for a test ride today, improve performance and lower your storage costs.


Was this benchmark run again without RAID protection?

Posted by guest on April 20, 2012 at 06:59 AM MDT #

The RAID type is listed both in the blog and on the SPEC.org link; this 7420 config used ZFS mirroring. This is the first time the 7420 has been benchmarked with SPECsfs.

Posted by Darius Zanganeh on April 20, 2012 at 07:19 AM MDT #

What are sources for estimated list prices?

Posted by guest on April 20, 2012 at 08:08 AM MDT #

The price sources are based on information from http://www.ideasinternational.com/ and other publicly available price lists found via Google search. They do not include any support costs. I would love to add the Hitachi list price if anyone has access to it; simply get the Bill of Materials off the SPEC posting.

Posted by Darius Zanganeh on April 20, 2012 at 08:22 AM MDT #

It seems pretty clear that the Oracle system gets its I/O numbers up purely due to the 16 solid-state drives included for data caching, along with a RAID 1 configuration, which is highly inefficient in terms of actual usable TB per $ vs. NetApp's RAID-DP. It also seems that you have misrepresented the amount of usable space on the NetApp systems listed above: the SPEC datasheets list 95+ TB, 97+ TB, and 125+ TB usable on the NetApp systems vs. 36+ TB for the Oracle. NetApp will soon have the ability to use solid-state drives as cache as well. It would be interesting to run a like-for-like disk test on these two systems to see how they stack up, along with official as-tested pricing.

Posted by guest on April 23, 2012 at 10:28 AM MDT #

Why is everyone who posts anonymous? Are you all from NetApp? lol.

Anyhow, I am afraid you are mistaken and do not understand the architecture of the ZFSSA. There is more to it than just 16 flash drives. You seem to ignore that we have 80 cores of Westmere processing power and 2TB of DRAM cache, which is magnitudes faster than any flash you can ever put in any system. We also have an OS that can take full advantage of all this horsepower. I like to say we are one of the few storage vendors that massively benefits from Moore's law. Other vendors like NetApp use far smaller CPUs and core counts. I am not sure why this is. I can only guess it's either (a) they like the fat margins of keeping their customers running on slower, older technology, or (b) their OS cannot take advantage of all the cores available today.

Also, the SPEC.org data sheets clearly list exported capacity as 85TB for the 2-node NetApp and 48TB for the 4-node NetApp. I do not see 95, 97 or 125 anywhere, and I have links on the post to those exact sheets.

Furthermore, for roughly another $130k +/- list, we can exceed the usable TB of the 2-node NetApp. There is still a VERY significant price gap for older, slower technology, whether you are measuring $/GB or $/IOPS. Either way, Oracle wins by a very healthy margin.

Posted by Darius Zanganeh on April 23, 2012 at 10:51 AM MDT #

“All 32 file systems are mounted and accessed by each client and evenly divided amongst all network paths to the 7420 controllers. The file systems data was evenly distributed on the backend”. That, my friends, is NOT scale-out :)

82TB of capacity, 36TB exported. You get to use less than 1/2 of what you pay for. Nice.

Why would you think a 28 node Isilon cluster unrealistic?

Posted by guest on April 23, 2012 at 04:58 PM MDT #

With or without SSDs, 15k drives, more memory, more CPU, etc., the price difference speaks for itself. Why would you pay more but get less?

You simply get more bang for the buck if you choose the Oracle 7420. So why would anyone spend more than double the cash for the other vendors' stuff?

Posted by guest on April 24, 2012 at 07:08 AM MDT #

To Eric Seidman, (EMC/Isilon Marketing manager) regarding your scale-out comment.

First off, this test has nothing to do with scale-out. Where in my post did I say the 7420 scales horizontally? This is a test comparing the same workload on multiple vendors' rigs; some scale horizontally to get performance, others do not need to. More importantly, what is the cost in $/IOPS? Sure, you can add more nodes to an Isilon cluster and match or even beat a 2-node 7420, but what is the point when the price is so far out of reality? Why would a customer purchase a huge, expensive Isilon SSD cluster for 10x the cost of the 7420 when they can get excellent performance from a 7420 for a FRACTION of the cost? How many Isilon 15k nodes would it take to hit 267k IOPS on SPECsfs?

Regarding the "1/2 of what you pay for" comment... you are completely wrong and do not understand all the capabilities of the ZFSSA. The ZFSSA has something I believe the Isilon cluster has zero of: advanced compression technologies, and the CPU/memory to handle them. With our lowest compression level, average customers get 2-3x compression with near-zero CPU overhead on the storage processors (we have 4 built-in levels). With Hybrid Columnar Compression on an Oracle Database, customers see anywhere from 10x to 70x compression. That is a compression level you will never see, because you are not Oracle storage, and ONLY Oracle storage supports HCC.

Lastly, what about SPC-2? Why doesn't EMC/Isilon post SPC-1 and SPC-2 benchmarks? I have read the old posts from Chuck about ridiculously sized boxes on SPC-1, but then EMC turns around and does the same thing on SPECsfs. Give me a break...

So my friend, enjoy your day.

Posted by Darius Zanganeh on April 24, 2012 at 07:40 AM MDT #

Interesting and impressive numbers. It would be even more interesting if the ZFS setup used the same double-parity config as the NetApp. We all know that mirrored ZFS setups are much faster than RAIDZ2 setups, so I suspect you would need many more disks to achieve the same IOPS numbers as the mirrored system. Could you elaborate on this?

Posted by guest on May 08, 2012 at 10:52 AM MDT #

Nice and helpful blog post.
Is it possible to compare greenness,
i.e. any comparison of performance per watt?

Posted by Mehmet Uluer on May 17, 2012 at 01:47 AM MDT #

Regarding testing with RAIDZ2: the point of a benchmark is to put your best foot forward, and with the ZFSSA that happens to be mirroring. You could run the tests with RAIDZ2, but why? It wouldn't be the fastest possible. Your argument would be that you get less usable space; that is true for raw capacity, but don't forget ZFSSA compression. Many customers get 2-3x compression with our lowest level, LZJB, which has very minor CPU overhead. I am not sure whether the SPECsfs data set is even compressible, but I can tell you that in a recent SAS test I ran, we averaged 4x compression and it IMPROVED performance: we had less data being read from and written to disk, disk utilization was lower with compression, and overall CPU was still very low. Also, regarding our supposed need to add LOTS of disks, you are missing one key component of any NAS/SAN: cache. The ZFSSA has some of the largest, most cost-effective caches in the industry; it's no wonder we are so fast. So compression, combined with the Hybrid Storage Pool and our cache, means there is no need for RAIDZ2 in my opinion.

Posted by Darius Zanganeh on May 17, 2012 at 07:32 AM MDT #

Yes, we can get you these numbers, and I would guess we are much greener than the much larger boxes in my comparison. If you're in a hurry, send me an email (darius dot zanganeh at oracle) offline and I will connect you with a peer of mine local to you if possible.

Posted by Darius Zanganeh on May 17, 2012 at 07:37 AM MDT #

It looks like ZFS blows away all other file systems pretty easily.

Posted by guest on May 21, 2012 at 09:54 AM MDT #

Not the ZFS file system, but rather the ZFS Storage Appliance. There is a big difference between the ZFS file system and the ZFS Storage Appliance, which uses the ZFS file system under the covers.

Posted by Darius Zanganeh on May 21, 2012 at 10:15 AM MDT #

- Disclosure: NetApp employee -

Firstly, I'm 99% sure that the pro-NetApp commenters aren't NetApp employees; we're pretty clear on our guidelines about posting on blogs and disclosing your affiliation.

It's clear Oracle has done a good job with the new array, and anyone familiar with the SPEC benchmark knows that these days it is for the most part gated by CPU, not disk, due to the availability of solid state in various forms and because a large number of the IOPS in the SPEC workload are metadata calls (getattr, lookup, etc.). As a result, it does a good job of measuring how efficiently you can throw CPUs at a workload.

EMC did a stellar job with this in their early VNX submissions, getting some great IOPS/core/GHz numbers, and you've done some great work pulling together something that harnesses 80 CPU cores to produce a really solid number.

I have some personal reservations about the way you've presented those numbers, but it's your blog, and that's your prerogative. Your benchmark shows you can get a great number, and you're willing to do that at a keen price; nobody can take that away from you.

It would be remiss of me, however, not to point out that the NetApp submission

1. is for a scale-out config using worst-case data paths,
2. uses less than half the number of CPU cores,
3. has less than a tenth of the amount of DRAM,
4. and has less than half the amount of solid state.

Given that street prices are generally based on a markup from cost of goods, and we all pay pretty much the same money for our components, I'll let the reader make up their mind which of these configs would actually cost them less in a competitive bidding situation.

John Martin
Principal Technologist - NetApp ANZ

Posted by guest on December 03, 2012 at 06:51 PM MST #

Hi John,
One thing to consider is that I list 3 different NetApp submissions, 2 of which are NOT scale-out but rather normal 2-node NetApp ONTAP 7-mode clusters, which is what most NetApp customers are running.

Since we all pay the same for silicon, as you mention, why is it that NetApp doesn't add more CPU to their boxes? Is it possible that the code will not properly use 80 cores like Solaris and ZFS can? https://communities.netapp.com/message/49148

The latest NetApp FAS 3250 has only 16 cores per cluster, versus 80 cores in an Oracle 7420. The new 3250 has a pathetic SPEC benchmark: http://www.spec.org/sfs2008/results/res2012q4/sfs2008-20121015-00216.html

Posted by Darius Zanganeh on December 05, 2012 at 03:10 PM MST #

I misread your statement when you said "For this comparison, I grabbed the top 2-node 6240 Netapp cluster." For me a NetApp cluster is something which runs Clustered ONTAP; it appears you were really comparing to the 6240 controller pair running 7-mode.

That changes my figures a little, so for an Oracle system to get roughly 40% better performance than a NetApp 6240, it requires:

1. more than four times as many CPU cores,
2. more than twenty times as much DRAM,
3. more than four times as much solid-state memory.

As to why we don’t use more CPU cores? To put it simply, it's because we don’t have to. I'm kind of surprised that you'd invite me to point out how many cores the Oracle configuration has in comparison. Remember, you not only need to buy these machines, you also have to power and cool them. Efficiency matters to customers, and it also matters to NetApp.

Because efficiency matters, we have a broad range of controllers, many of which we benchmark, including the mid-range 3250. I'm not sure about Oracle, as there are only 2 benchmarks, both of which seem to be for your most powerful scale-up systems, which max out at about 250K IOPS. From my perspective, if you're going to crow about a SPEC hero number, you really need to be looking at a scale-out configuration like NetApp, Isilon, Avere and Huawei did.

Finally, if you're really interested in how ONTAP's CSMP CPU allocation algorithm works, and how that lets us use multiple CPU cores so efficiently, let me know and I'll write up a blog post; the communities post you pointed to is a rather confusing way of getting your head around the subject.


Posted by guest on December 05, 2012 at 06:37 PM MST #

First off, if you read any of the official Oracle PR releases on this benchmark, we specifically call out the 3270 benchmark, against which we show greater than 2.5x the performance, and we are still that much faster than the NEW shiny 3250. I included the bigger boxes in my blog simply for reference, because we performed so well. If a customer is looking for new midrange storage like a FAS 3250 or FAS 3270, which is the vast majority of the storage NetApp sells, they should strongly consider the Oracle 7420: 2.5x the performance at 1/2 the cost.

This is almost comical. The NetApp 3270 has almost 360 drives versus our 280 drives; those drives use much more power and cooling than my two efficient 7420 controllers. On the 3250 you used 336 drives. It is obvious you are slower, more expensive, and use more power. On your newest benchmark, your 3250 has even WORSE latency at 5.6ms. Intel cores and DRAM use a lot less power than a bunch of spinning disk; if you're so good at using them, then why not add more and increase your performance?

Now only scale-out matters on this benchmark? LOL. You have only had cluster mode for a few years, and what percentage of your customers actually use it? In the past NetApp always bragged about their SPECsfs benchmarks and even asked Oracle and EMC where theirs were. We have posted ours, and we crush NetApp at 1/2 the cost!

Now what about other benchmarks, like SPC-1 and SPC-2? You haven't updated the SPC-1 result for the 3250. Why not? You have never tested SPC-2. Why not?

As I said above, Intel cores and DRAM use a lot less power than a bunch of spinning disk; maybe adding more would hurt the massive margins you're charging your customers. Oracle's ZFS Storage Appliance continues to take MANY customers from NetApp because of many factors: performance, price, analytics, etc.

Interestingly, I took a look at the stock charts for Oracle and NetApp, as well as a few other big storage vendors, and it is very interesting... https://blogs.oracle.com/si/resource/stock.JPG There is a 30% gap in favor of Oracle over NetApp since I posted this blog last April 18th. I know there are many other factors involved in these stock prices, but it is still very interesting nonetheless.

Posted by Darius Zanganeh on December 06, 2012 at 11:50 AM MST #

Thanks Darius, it's nice to know exactly what we're comparing this to. I didn't read the press releases, nor was I replying to that release; I was replying to your post, which was primarily a comparison to the FAS6240.

If you do want to compare the 7420 to the 3270, then I'll amend the figures once again: to get a 240% better result, you used a box with

1. more than eleven times as many CPU cores,
2. more than one hundred and sixty times as much memory.

I really wish you'd also published Power Consumption figures too :-)

Regarding disk efficiency, we've already demonstrated our cache effectiveness. On a previous 3160 benchmark, we used a modest amount of extended cache and reduced the number of drives required by 75%. By way of comparison: to get about 1080 IOPS per 15K spindle, we implemented a cache that was 7.6% of the capacity of the fileset, while the Oracle benchmark got about 956 IOPS/drive with a cache size of about 22% of the fileset size.

The 3250 benchmark, on the other hand, wasn’t done to demonstrate cache efficiency; it was done to allow a comparison to the old 3270. It's also worth noting that the 3250 is not a replacement for the 3270; it's a replacement for the 3240, with around 70% more performance. Every benchmark we do is generally done to create a fairly specific proof point; in the case of the 3250 benchmark, it shows almost identical performance to the 3270 from a controller that sells at a much lower price point.

We might pick one of our controllers and do a "here's a set config and here's the performance across every known benchmark" exercise, the way Oracle seems to have done with the 7420. It might be kind of interesting, but I'm not sure what it would prove. Personally, I'd like to see all the vendors, including NetApp, do way more benchmarking of all their models, but it's a time-consuming and expensive thing to do, and as you've already demonstrated, it's easy to draw some pretty odd conclusions from the results. We'll do more benchmarking in the future; you'll just have to wait to see the results :-)

Going forward, I think non-scale-out benchmark configs will still be valid to demonstrate things like model-replacement equivalency and cache efficiency, but I'll say it again: if you're after "my number is the biggest" hero-number bragging rights, scale-out is the only game in town. But scale-out isn't just about hero numbers; for customers to scale rapidly without disruption as needs change, scale-out is an elegant and efficient solution, and they need to know they can do that predictably and reliably. That's why you see benchmark series like the ones done by NetApp and Isilon. Even though scale-out NFS is a relatively small market, and Clustered ONTAP has a good presence in that market, scale-out unified storage has much broader appeal and is doing really well for us. I can't disclose numbers, but based on the information I have, I wouldn’t be surprised if the number of new clusters sold since March exceeds the number of Oracle 7420s sold in the same period; either way, I'm happy with the sales of Clustered ONTAP.

As a technology blogger, it's probably worth pointing out that stock charts are a REALLY poor proxy for technology comparisons, but if you want to go there, you should also look at things like P/E multiples (an indication of how fast the market expects you to grow) and market-share numbers. If you've got Oracle's storage revenue and profitability figures on hand for us to do a side-by-side comparison with NetApp's published financial reports, post them up; personally, I would LOVE to see a comparison. Then again, maybe your readers would prefer us to stick to talking about the merits of our technology and how it can help them solve the problems they've got.

In closing, while this has been fun, I don’t have a lot more time to spend on this. I have expressed my concerns about the amount of hardware you had to throw at the solution to achieve your benchmark results, and the leeway that gives you to be competitive with street pricing, but as I said initially, your benchmark shows you can get a great scale-up number, and you're willing to do that at a keen list price; nobody can take that away from you. Kudos to you and your team.

Best Regards

Posted by John Martin on December 08, 2012 at 05:40 PM MST #

I like ZFS and WAFL as file systems and don't really have a favorite.
$/GB and $/IOPS are what customers care about, and I agree the ZFSSA has the price advantage.
But there are one or two things which ZFS as a file system urgently needs to work on, especially given that ZFS has been around since Solaris 10, if I recollect correctly since 2003/4.
The main one is defragmentation and the performance hit as the pool fills above 80%. I know you could argue that since the total cost is substantially lower, why not keep some 20% of the space spare in the zpool; but from a customer standpoint, wasted space is wasted space. You don't always do the cost/GB math when you see that 20% is set aside for performance reasons.

Posted by martinfk on February 22, 2013 at 08:32 AM MST #

It's always a good idea not to let any file system get 100% full. In addition to your comments about total cost, I would add that we tell our customers to turn on LZJB compression by default for pretty much every workload out there, including production databases, virtualization, etc. This usually yields a significant amount of extra usable space, which adds even more value to the $/GB equation.

Posted by Darius Zanganeh on February 22, 2013 at 08:59 AM MST #

I noticed that Oracle put out a document in March which rehashes some of the material you've posted here. It is inaccurate again (specifically on the size of the I/Os we write to disk for block workloads, plus a few other assertions); I'd be happy to help you correct those if you're interested.

Also for the sake of completeness, would you please approve the last reply I sent back in December ? If you cant find it, I put a copy of it on my blog here http://storagewithoutborders.com/2013/02/15/unfinished-business/


Posted by John Martin on April 10, 2013 at 05:25 PM MDT #

I apologize for not posting your previous response; it is now posted. Get me a NetApp power calculator and I will post the power usage of all the entries. I am curious what you think is wrong with more CPU and cache for much less cost.

Let's stick to the relevant facts and forget about stocks:
- Oracle has 2.5x the performance at 1/2 the cost of NetApp.
- Why wouldn't a customer want more CPU and cache? You act like it is a bad thing, which is silly. Do you really think the power consumption is going to be that different? Maybe you're right, and I would love to post the results; just get me a NetApp power calculator so I can compare the SPEC-posted systems. Our power calculator is public and posted; I couldn't find one posted publicly by NetApp.
- You say the NetApp cache is SO efficient, and you talk about an old, non-relevant 3160 SPECsfs post, but you fail to mention that the 3270 gets a MEASLY 281 IOPS per drive and the 3250 a whopping 300 IOPS per drive. So your point is that the 3250 benchmark was done to compare with the 3270? Then what was the 3270 benchmark done for? I thought the purpose of a benchmark was to compare many vendors' systems against each other with the workload remaining consistent. With that in mind, the Oracle 7420 still crushes NetApp in price, efficiency and performance, and I am guessing we are also better than or comparable to NetApp in power usage.

Posted by Darius Zanganeh on April 10, 2013 at 06:22 PM MDT #

Thanks for posting the reply. Again, I think you're missing my point.

DZ … "Oracle has 2.5x Performance for 1/2 the cost of a Netapp"

A more accurate statement is that "The Oracle 7420 attained a benchmark result that was 2.5x better for 1/2 the _list price_ of a NetApp 3270 array benchmarked in 2011 that had significantly less hardware."

What this says is that Oracle's list price is significantly lower than NetApp's list price. You could say the same thing about the difference in price between a Hyundai i30 and an Audi A3.

Secondly, as I pointed out in previous comments, the rest of the results also say that the Oracle solution makes relatively inefficient use of CPU and memory when compared to a NetApp system that achieves similar performance.

Yes, the list price of the NetApp system is significantly higher than that of an equivalently performing 7420, but this is a marketing and pricing issue, not a technical one. In general I like to stick to technical merits, because pricing is a fickle thing that can be adjusted at the stroke of a pen; technology requires a lot more work to get right.

In the end, how this list-price differentiation translates into what these solutions will actually cost a customer is highly debatable. I do a LOT of research into street prices as part of my job, and in general storage is increasingly purchased as part of an overall upgrade; this is where things get murky very quickly, as margins are moved around the various components within the infrastructure to subsidise discounting in other areas. Having said that, I will let you in on something: based on the data I have, for many quarters, in terms of the average $/raw TB paid by customers in my market, Oracle customers paid about 25% MORE for V7000 storage than NetApp customers paid for storage on FAS32xx, and only recently did Oracle begin to reach pricing parity with NetApp. We could argue about how the analyst arrived at those figures, but from my analysis the trend is clear across almost all vendors and array families: customer $/TB correlates strongly with implied manufacturing costs, and very poorly with vendor list prices. The main exceptions are new product introductions with a compelling new and unique value proposition (e.g. Data Domain), or vendors buying business at very low or even negative margin to seed the market (e.g. XIV in the early days).

Now, personally, I disagree with NetApp's list-pricing policy, but there are reasons why that list price is so much higher than the actual street price most people pay; many of those reasons have to do with boring things like long-term pricing contracts. If you'd like to turn this into a marketing discussion around pricing strategies, I'm cool with that, but I don't think the people who read either of our blogs are overly interested. However, I will say this again: the price people pay in the end has more to do with the cost of manufacture, and a solution that gets more performance out of less hardware will generally cost the customer less, especially if the operational expenses are lower.

DZ .. "Why wouldn't a customer want more CPU and Cache?"

Why would someone want less CPU or cache? Because it costs them less, either in street-pricing terms or in the cost of powering and cooling. And yes, I believe that a 7420 controller with more than eleven times as many CPU cores and more than one hundred and sixty times as much DRAM will chew a lot more power and cooling than a 3270 controller.

It's not just the cost of the electricity (carbon footprint and green ethics aside); it's also the opportunity cost of using that power for something else. Data centers have finite resources for power, and many (most) are very close to the point where you can't add more systems. In those environments, power-hungry systems that aren't running business-generating applications are not viewed kindly.

JM interpretation of DZ … "Happy to do a power consumption comparison, where is the NetApp information?"

I've answered that in response to a similar question on my blog at storagewithoutborders.com; see the blog-post URL in a previous comment regarding access to power-consumption figures.

DZ .. "You say the Netapp cache is SO efficient and you talk about an old non relevant 3160 SPEC SFS post"

I referenced the "non-relevant 3160 SPEC SFS post" because it is relevant: it is where we tested the same controller with a combination of flash acceleration and no flash acceleration, with both SATA and FC/SAS spindles. The specific result I referenced was the most comparable configuration, which includes flash and 300GB 15K disks and, as I pointed out, achieved 1080 IOPS per 15K spindle with a cache that was 7.6% of the fileset size.

If you prefer, I could have used the more recent (though still old) 6240 dual-node config, which uses 450GB 15K disks and achieved 662 IOPS per drive with a cache that was a mere 4.5% of the fileset size, or the 24-node 6240 config, which achieved 875 IOPS per drive with a cache that was 7.6% of the fileset size. As you can see, a modest amount of flash improves the IOPS/disk enormously, and there is a good correlation between more flash as a percentage of the working set and better results in terms of IOPS/disk. Before you ask: as far as I can tell, the main reason for the difference in IOPS/spindle between the 24-node 6240 and the old 3160 with a similar cache size as a percentage of the fileset is that our scale-out benchmark used worst-case paths from the client to the data, to provide a squeaky-clean implementation of SPEC's uniform access rule.

DZ .. "You fail to mention that the 3270 gets a MEASLY 281 IOPS per drive and that the 3250 gets a whopping 300 IOPS per drive. So your point is that the 3250 was done to compare with the 3270? What was the 3270 done for?"

Neither the 3270 nor the 3250 benchmark used Flash Cache, so the IOPS/spindle are going to be good, but not stellar. I don’t know exactly why we didn’t use flash in the old 3270 benchmark; maybe it's because SPECsfs is a better indication of CPU and metadata handling than of reads and writes to disk, and, like I said, we'd already proved the effectiveness of our flash-based caching with the series of 3160 benchmarks.

Going forward, I doubt we'll do another primary benchmark without flash, but it's worth saying again that the 3250 benchmark was done to show performance equivalency with the 3270, so we kept the configurations as close to identical as we could, which meant neither the 3270 nor the 3250 benchmark used flash to improve the IOPS/disk. Had we used it, I have every reason to believe the results would have been in line with the 3160 and 6240 benchmarks referenced above.

DZ .. "I thought the purpose of a benchmark was to compare many vendors systems against each other with the workload remaining consistent?"

We tend to use benchmarks as a way of demonstrating how much a NetApp technology has improved against a previous NetApp baseline, to help our customers make good purchasing decisions. Proving we're better than someone else is not a primary consideration, though that is often a secondary effect. Oracle is free to use its benchmarks in any way it chooses. Personally, I'd LOVE to see a range of configurations from each technology benchmarked, rather than just sweet spots; maybe OpenSFS and Netmist will bring this about, but the fact is that running open, verifiable, and fairly comparable benchmarks is expensive and time-consuming, and I will probably never see enough good engineering data published. If you've got some ideas to simplify this, I'd love to work with you on it (seriously, we might compete against each other, but we both clearly care about this stuff, and not many do).

DZ .. With that in mind the Oracle 7420 still crushes the netapp in price, efficiency and performance. I am guessing we are also still better or comparable in power usage as well.

You'll see from the above that I respectfully disagree with pretty much everything in that last statement, and I'm looking forward to that controller power-usage comparison :-)

Posted by John Martin on April 11, 2013 at 02:40 AM MDT #

Hi John,
I am not sure if anyone besides us will read this far into the comments, but I will reply anyway. I took the liberty of doing some googling myself to get the power comparison done.

The Oracle 7420 ZFS Storage Appliance, as configured in the SPECsfs 2008 test, uses 9952 watts at a 100% workload (meaning the CPUs are pegged at 100%).
This is publicly available at http://www.oracle.com/us/products/servers-storage/sun-power-calculators/calc/s7420-power-calculator-180618.html

The NetApp 3250 (which uses less power than the 3270) uses 17866 watts. This number was calculated from the NetApp site requirements guide located here: http://support.netapp.com/NOW/public/knowledge/docs/hardware/NetApp/site/pdf/site.pdf
Calculations used:
2 x 3250 power supplies @ 533 watts each equals 1066 watts for the controllers (page 75 of the NetApp doc)
14 x DS4243 disk shelves, each with 4 power supplies, at a 1200-watt maximum per shelf (600 watts max for each of 2 supplies) equals 16800 watts!!! (page 117 of the NetApp doc)

So the Oracle 7420 is not only 2.5 times faster, it also consumes a little more than HALF the power of the NetApp 3250.
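These figures can be sanity-checked in a couple of lines (the wattages are the ones quoted above from the two vendor documents):

```python
# Power comparison using the numbers cited above.
oracle_7420_watts = 9952               # Oracle power calculator, 100% workload

netapp_controller_watts = 2 * 533      # two FAS3250 power supplies @ 533 W (p. 75)
netapp_shelf_watts = 14 * 1200         # fourteen DS4243 shelves @ 1200 W max (p. 117)
netapp_3250_watts = netapp_controller_watts + netapp_shelf_watts

print(netapp_3250_watts)                                # 17866
print(f"{oracle_7420_watts / netapp_3250_watts:.0%}")   # 56%, i.e. a bit over half
```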

Posted by Darius Zanganeh on April 11, 2013 at 01:00 PM MDT #

Now I wonder, did John run out of arguments, or did he just need a good vacation, spending $ on overpriced storage?

Somehow I need to see him admit he has "lost" the battle, which I think he already did by saying something about a "Hyundai i30 compared to an Audi A3".

Posted by jan kaspersen on August 03, 2013 at 05:41 AM MDT #
