Monday Sep 23, 2013

Using Analytics (DTrace) to Troubleshoot a Performance Problem in Real Time

I was recently on a call with one of the largest financial retirement investment firms in the USA.  They were using a very small 7320 ZFS Storage Appliance (ZFSSA) with 24GB of DRAM and 18 x 3TB HDDs.  This system was nothing in terms of specifications compared to the latest shipping systems, the ZS3-2 and ZS3-4 storage systems with 512GB and 2TB of DRAM respectively.  Nevertheless, I was more than happy to see what help I, and mostly the ZFSSA Analytics (DTrace), could offer.

The Problem: Umm, Performance aka Storage IO Latency is less than acceptable...

This customer was (and is) using this very small ZFSSA as an additional Data Guard target from their production Exadata.  In this case it was causing issues and pain.  Below you can see a small sample screenshot we got from their Enterprise Manager Console.

Discovery: Let's see what is going on right now! It's time to take a little DTrace tour...

The next step was to fire up the browser and take a look at the real-time analytics.  Our first stop on the DTrace train is to look at IOPS by protocol.  In this case the protocol is NFSv3, but we could easily see the same thing for NFSv4, SMB, HTTP, FTP, FC, iSCSI, etc.  Quickly we see that this little box with 24GB of DRAM and 18 x 7200 RPM HDDs was being pounded! Averaging 13,000 IOPS with 18 drives isn't bad.  I should note that this box had zero read SSDs and 2 x write SSDs.

Doing a quick calculation, we see that this little system is doing over 700 IOPS per drive. Holy Cow SON! Wait, is that even possible? There is something else at play here; could it be the large ZFSSA cache (a measly 24GB in our case) at work?  Hold your horses, IOPS are not everything. In fact, you could argue that they really don't matter at all; what really matters is latency.  This is how we truly measure performance: how long it takes to do one thing, versus how many things can be done at the same time regardless of how fast each one gets done.  To best understand how much effect DRAM has on latency, see our World Record SPECsfs benchmark information here.  Here you see that, sure enough, some of the read latency is pathetic for just about any application in the world.
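To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python. The 13,000 IOPS average and the 18 drives come straight from the Analytics view above; the ~150 IOPS per 7200 RPM drive figure is only a rough rule of thumb, not a measurement from this system.

    # Back-of-the-envelope: how hard would 18 x 7200 RPM HDDs have to work
    # to serve 13,000 IOPS with no cache in front of them?
    total_iops = 13_000        # average observed in Analytics
    drive_count = 18           # 3TB 7200 RPM HDDs in the pool
    typical_hdd_iops = 150     # rough rule of thumb for one 7200 RPM drive

    iops_per_drive = total_iops / drive_count
    print(f"Implied load per drive: {iops_per_drive:.0f} IOPS")            # ~722
    print(f"Rule-of-thumb drive capability: ~{typical_hdd_iops} IOPS")
    print(f"Overcommit factor: {iops_per_drive / typical_hdd_iops:.1f}x")  # ~4.8x

The gap between those two numbers is exactly what the DRAM cache is absorbing, which is where we go next.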

There are many ways to solve this problem. The average SAN/NAS vendor would tell you to simply add more disk. With DTrace we can get more granular, ask many other questions, and see if there is perhaps a better or more efficient way to solve this problem.

This leads us to our next stop on the DTrace discovery train, ARC (Adaptive Replacement Cache) accesses.  Here we quickly find that, even with our lackluster read latency, we have an amazing read cache hit ratio.  Roughly 60% of our IO is coming from our 24GB of DRAM. On this particular system the DRAM is upgradable to 144GB.  Do ya think that would make a small dent in those 6,033 data misses below?
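To see why that hit ratio matters so much, here is a minimal sketch of how many IOs actually reach the spindles at different hit ratios. The 60% figure and the 13,000 IOPS are from the Analytics views above; the other hit ratios are hypothetical what-ifs.

    # How many IOs fall through to the 18 spinning drives at a given ARC hit ratio?
    total_iops = 13_000
    drive_count = 18

    for hit_ratio in (0.60, 0.75, 0.90):           # 0.60 is what Analytics showed
        disk_iops = total_iops * (1 - hit_ratio)   # misses land on disk
        print(f"hit ratio {hit_ratio:.0%}: {disk_iops:,.0f} IOPS on disk "
              f"(~{disk_iops / drive_count:.0f} per drive)")

Every point of hit ratio gained is IO the 7200 RPM drives never have to service.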


This nicely leads into the next stop on the DTrace train, which is to ask DTrace, for all of those 6,033 data misses, how many would be eligible to be read from L2ARC (read SSDs in the Hybrid Storage Pool).  We quickly noticed that they would indeed have made a huge difference; sometimes 100% of the misses were eligible.  This means that after missing the soon-to-be-upgraded (6x larger) DRAM-based cache, the rest of the read IOs in this workload would likely be served from high-performance MLC flash SSDs right in the controller itself.
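Here is a minimal three-tier latency model that shows why DRAM plus L2ARC eligibility matters. The per-tier service times and the "upgraded" hit distribution are assumptions for illustration only; only the ~60% DRAM hit ratio comes from the Analytics data above.

    # Rough weighted-average latency model for the Hybrid Storage Pool.
    dram_latency_ms = 0.05    # assumed ARC (DRAM) hit service time
    ssd_latency_ms  = 0.30    # assumed L2ARC (read SSD) hit service time
    disk_latency_ms = 10.0    # assumed service time on a busy 7200 RPM drive

    def avg_read_latency(dram_hits, l2arc_hits):
        """Weighted average read latency for a given hit distribution."""
        disk_hits = 1.0 - dram_hits - l2arc_hits
        return (dram_hits * dram_latency_ms +
                l2arc_hits * ssd_latency_ms +
                disk_hits * disk_latency_ms)

    print(f"today (60% DRAM, no L2ARC):       {avg_read_latency(0.60, 0.00):.2f} ms")
    print(f"hypothetical upgrade (80% + 18%): {avg_read_latency(0.80, 0.18):.2f} ms")

The exact numbers are made up, but the shape of the result is not: moving misses from disk to flash is where the latency win comes from.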

Conclusion: Analytics on the ZFSSA are amazing, the Hybrid Storage Pool of the ZFSSA is amazing, and the large DRAM-based cache on the ZFSSA is very amazing...

At this point I recommended that they take a two-phase approach to the workload.  First, upgrade the DRAM cache 6x and add 2 x L2ARC read SSDs.  After that, they could evaluate whether they still needed to add more disk.

Extra Credit Stop:  One last stop I made was to look at their NFS share settings and see if they had compression turned on, as I have recommended in a previous post.  I noticed that they did not have it enabled and that CPU utilization was very low, less than 10% on average.  I then explained to the customer how they would benefit even now, without any upgrades, by enabling compression, and I asked if I could enable it that second, at 12pm on a busy workday.  They trusted me and we enabled it on the fly.  Then, for giggles, I decided to see if it made a difference on disk IO, and sure enough it made an immediate impact: disk IO dropped, because every new write now takes less disk space and less IO to move through the subsystem, since we compress in real time in memory.  Remember that this system was a Data Guard target, so it was constantly being written to.  You can clearly see in the image below how compression lowered the amount of IO on the actual drives.  Try doing that with your average storage array compression.


Special thanks go out to my teammates Michael Dautle and Hugh Shannon, who helped with this adventure and with capturing the screenshots.

Tuesday Sep 10, 2013

ZFS Storage Appliance Benchmarks Destroy Netapp, EMC, Hitachi, HP etc..

Today Oracle released two new storage products and also posted two World Record Benchmarks!

First, Oracle posted a new SPC-2 throughput benchmark that is faster than anything else posted on the planet! The benchmark came in at 17,244 MBPS.  What is probably even more amazing than the result itself is the cost of the system used to accomplish it.  Oracle by far has the lowest cost per MBPS compared to our major competitors, coming in at $22.53.  IBM, by contrast, comes in at $131.21.  I should also note that Oracle accomplished this with less hardware than IBM: the ZS3-4 entry uses 384 drives while the IBM DS8700 uses 480 drives.  When it comes to throughput applications, the Oracle ZS3-4 beats every competitor in every category of the SPC-2 benchmark.  You can read the details yourself here: http://www.storageperformance.org/benchmark_results_files/SPC-2/Oracle_SPC-2/B00067_Oracle_ZFS-ZS3-4/b00067_Oracle_ZFS_Storage_ZS3-4_SPC-2_executive-summary.pdf
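For anyone who wants to check the math, the cost claim works out like this (a small sketch; the implied total price is derived arithmetic, since the post quotes only the $/MBPS figures):

    # SPC-2 price/performance arithmetic from the figures quoted above.
    oracle_mbps = 17_244
    oracle_cost_per_mbps = 22.53
    ibm_cost_per_mbps = 131.21

    implied_oracle_price = oracle_mbps * oracle_cost_per_mbps
    print(f"Implied ZS3-4 SPC-2 price: ${implied_oracle_price:,.0f}")              # ~$388,507
    print(f"IBM $/MBPS premium: {ibm_cost_per_mbps / oracle_cost_per_mbps:.1f}x")  # ~5.8x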

The second benchmark posted was the SPECsfs2008 benchmark.  Full Disclosure: SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation benchmark suite measuring file server throughput and response time. Oracle did break the World Record response time measurement, coming in at 0.70ms.  Oracle did not break the record for most operations per second, but in many ways those comparisons are sort of silly, because you would be comparing apples and oranges.  The Oracle benchmark is based on a standard 2-node cluster with some disk and flash behind it.  The vendors with higher ops/sec numbers mostly have enormous configs or all-flash configs, which are irrelevant for all but a few niche workloads.  For the average NAS user, the ZS3-4 config used for this benchmark is perfectly in line with what most customers purchase to run things like Oracle Databases, VMware, MS SQL, etc.

The closest 2-node cluster comparable is the recent Hitachi HNAS 4100, which came in at 293,128 ops/sec with a latency of 1.67ms.  Compare that to the 2-node ZS3-4 entry with 450,702 ops/sec and 0.70ms latency.  That latency is more than twice as fast as Hitachi's, and the ZS3-4 still blows it away in ops/sec.  Interestingly, they both have very close to the same number of drives.  You can read the actual results here: http://www.spec.org/sfs2008/results/res2013q3/sfs2008-20130819-00228.html.  With the new ZFS Storage Appliance hardware/software combo, Oracle can clearly see the competition in the rearview mirror when it comes to performance.  Many analysts have also recently commented on the Oracle ZFS Storage line-up:

http://www.dcig.com/2013/09/the-era-of-application-storage-convergence-arrives.html
http://tanejagroup.com/news/blog/blog-systems-and-technology/bending-benchmarks-oracle-zooms-zfs-with-zs3-storage-appliance#.Ui-tVMbBN8G
http://www.oracle.com/us/products/servers-storage/storage/nas/esg-brief-analyst-paper-2008430.pdf
http://www.oracle.com/us/products/servers-storage/storage/nas/snapshop-report-analyst-paper-2008431.pdf
http://www.oracle.com/us/products/servers-storage/storage/nas/taneja-group-zs3-analyst-paper-2008432.pdf
http://www.oracle.com/us/products/servers-storage/storage/nas/ett-zs3-analyst-paper-2008433.pdf

Monday Jul 15, 2013

ZFS Storage Appliance Compression vs Netapp, EMC and IBM in a Real Production Environment

Dear Oracle ZFSSA Customer, Please enable LZJB compression on every file system and LUN you have by default
I have been involved in many large proofs of concept during my time here at Oracle, and I have come to realize that there is a quite compelling and, honestly, very important feature in the ZFS Storage Appliance that every customer and potential customer should review and plan on using.  That feature is inline data compression, for just about any storage need, including production database storage.  Many storage vendors today list and possibly tout their compression technologies, but rarely do they tell you to turn it on for production storage, and never would they tell you to turn it on for every file system and LUN.  This is where the ZFS Storage Appliance is somewhat unique.  Real-world experience over a wide range of production systems has shown that Oracle storage customers should run LZJB compression unless there is a compelling reason not to, such as storage for incompressible data.  Even in the case of mixed data, where some data does not compress, the benefit of LZJB compression is still strong enough, and system CPU capacity is still large enough, to accommodate running compression even when not all of the data in a share will compress.

Why Compress Everything?
There are many reasons to compress data, such as saving space and money, which are great reasons on their own, but with the ZFSSA you may actually gain performance as well.  ZFS compression reduces the amount of data written to the physical disks over the back-end channels.  Finite physical limits on disk drive bandwidth and channel bandwidth put fundamental limitations on application data transfer rates.  With compression, the amount of data written to the drives and channels is smaller than the amount of data written by the application, so using CPU to perform compression increases the effective drive and channel bandwidth and opens up an important bottleneck that constrains application throughput.
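As a simple illustration of that bandwidth effect, here is a sketch. The 2.79x ratio is the real customer figure shown later in this post; the back-end bandwidth number is purely an assumed, illustrative value.

    # Effective application bandwidth when data is compressed before it is written.
    backend_bandwidth_mbs = 2_000    # assumed raw disk/channel bandwidth, MB/s
    compression_ratio = 2.79         # LZJB ratio observed on a customer pool

    effective_app_bandwidth = backend_bandwidth_mbs * compression_ratio
    print(f"Application-visible bandwidth: ~{effective_app_bandwidth:,.0f} MB/s "
          f"through a {backend_bandwidth_mbs:,} MB/s back end")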

What does this cost in terms of performance and is there a license fee?
In terms of performance it costs very, very little. The currently shipping ZFSSA 7420, for example, has either 32 or 40 cores of Intel Westmere horsepower under the covers, coupled with up to 1TB of DRAM and a Solaris kernel that is exceptional at doing many things at the same time and using every core it can.  Usually most of our customers have CPU cores and MHz sitting around waiting for something to do.  Turning on LZJB compression by default turns out to be a pretty good win-win: you use a few cores, gain performance, and save space.  There is no license fee or cost to use ZFSSA compression; you simply turn it on where you want to use it. It is granular enough that you can enable it at the file system or LUN level.

There are 4 different built-in levels of compression to choose from with the ZFSSA: LZJB, GZIP-2, GZIP (GZIP-6) and GZIP-9.  A customer could easily create 4 file systems, each with a different compression setting, and then copy the same test data to each share.  You could then very easily correlate the CPU usage (using DTrace Analytics) against the compression ratio and make an educated decision on which level of compression to use for a particular data type.  I tell my customers to, at a minimum, start with LZJB pretty much everywhere.  Below is a real-world example I pulled from one of my customers.  Here you see a ZFS pool that has multiple Oracle databases running on it.  They have LZJB turned on everywhere and are seeing a very nice 2.79x compression ratio.
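If you want a rough feel for the ratio-versus-CPU trade-off before touching the appliance, you can approximate the test offline. Python has no LZJB, so this sketch only compares zlib (gzip) levels 2, 6 and 9 on a sample file; it illustrates the methodology, not the appliance's exact algorithms, and the file name is just a placeholder for your own representative data.

    # Offline approximation of the "one share per compression level" test.
    import time
    import zlib

    def measure(data: bytes, level: int):
        start = time.perf_counter()
        compressed = zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        return len(data) / len(compressed), elapsed

    with open("sample_datafile.dbf", "rb") as f:   # any representative test file
        data = f.read()

    for level in (2, 6, 9):                        # roughly GZIP-2, GZIP, GZIP-9
        ratio, secs = measure(data, level)
        print(f"zlib level {level}: {ratio:.2f}x compression in {secs:.2f}s")

On the appliance itself you would do the same thing with four shares and watch CPU in the DTrace Analytics while the copies run.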

The Other Guys Compression Story: 
So at this point you may be wondering what is so unique about Oracle ZFSSA compression versus, say, Netapp Data Compression, EMC VNX compression, or IBM's v7000/SVC compression.  The basic issue is that pretty much all of the other big players have trouble scaling compression across CPUs, threads and processes, and therefore turning on compression in those environments can create a CPU bottleneck pretty fast.  Let's take a minute to examine each.

Netapp Data Compression:
Netapp's own whitepapers are riddled with WARNINGs about using compression in production environments (especially inline compression for Oracle OLTP).  Most notably, they say that compression can chew up a lot of the meager amount of CPU/threads available.  I tried to find a detailed document that explains how Netapp uses its cores and found it next to impossible, even for Google!  I did find a few blogs and Netapp forums where people frequently mentioned the Kahuna domain and that compression is part of this domain (CPU) and shared with many other services.  Maybe someone else can share how all the cores on a Netapp box are used?  It appears that if a Netapp box is over 50% CPU utilized, you could get into trouble turning on compression.

"On workloads such as file services, systems with less than 50% CPU utilization have shown an increased CPU usage of ~20% for datasets that were compressible. For systems with more than 50% CPU utilization, the impact may be more significant."

"When data is read from a compressed volume, the impact on the read performance varies depending on the access patterns, the amount of compression savings on disk, and how busy the system resources are (CPU and disk). In a sample test with a 50% CPU load on the system, read throughput from a dataset with 50% compressibility showed decreased throughput of 25%. On a typical system the impact could be higher because of the additional load on the system. Typically the most impact is seen on small random reads of highly compressible data and on a system that is more than 50% CPU busy. Impact on performance will vary and should be tested before implementing in production." Reference: http://www.netapp.com/us/system/pdf-reader.aspx?m=tr-3958.pdf&cc=us

The following conclusions can be drawn based on the available evidence from industry analyst reviews as well as Netapp’s own documentation: In the best possible scenario, NetApp’s compression technology occurs asynchronously for active, transactional workloads, meaning that the data is initially written in an uncompressed form. Therefore, capacity must always be over-provisioned and later reclaimed, most likely with some degree of fragmentation, as compared to the synchronous or in-line compression architecture of the ZFS Storage Appliance. As a result, much of the potential cost-saving value of compression cannot be effectively realized as the data must always initially reside in an uncompressed state on the media.

EMC VNX Compression:
EMC is very similar to Netapp in that they give lots of warnings and say that compression is only recommended for static data.  EMC also has very small limits on the number of compressed LUNs you can have per controller.  The document that details this is a few years old, originally written for the Clariion CX4, but apparently still relevant for the newer controllers, as it is directly linked in the newer VNX compression datasheet.

"Compression’s strength is improved capacity utilization. Therefore, compression is not recommended for active database or messaging systems, but it can successfully be applied to more static datasets like archives, clones of database, or messaging-system volumes." Reference: http://www.emc.com/collateral/hardware/white-papers/h8045-data-compression-wp.pdf http://www.emc.com/collateral/hardware/white-papers/h8198-vnx-deduplication-compression-wp.pdf
"VNX file deduplication and compression should be used exclusively for file data as more granular control is available for file data rather than block, so the system can identify inactive data to process versus active data.”
“Block data compression is intended for relatively inactive data that requires the high availability of the VNX system. Consider static data repositories or copies of active data sets that users want to keep on highly available storage."
Reference: http://www.emc.com/collateral/hardware/white-papers/h8198-vnx-deduplication-compression-wp.pdf

In conclusion, like Netapp, EMC’s compression technology is best suited for infrequently accessed data, and similarly requires post process functions to compress data; the usage cases for any data that will be accessed beyond an archival access frequency are highly limited.  Because data is initially written in the uncompressed form, compression must occur asynchronously by means of a time-consuming compression post-process. In fact, compression post-processing for newly written data often must run for time periods measured in days before the data is stored in compressed format. As a result, much of the space-saving value of compression is moot as the data must reside in an uncompressed state on the media, meaning capacity must always be over-provisioned relative to implementations that feature in-line compression such as the ZFS Storage Appliance.

IBM Real-Time Compression:

IBM is slightly different from the above in that they actually claim to support running their compression for production databases.  But as you dig into the bowels of the Redbook, you quickly find there are some severe limitations and concerns to be aware of.  First off, the v7000 has only 4 cores and 8GB of cache memory, pretty small versus the 32-40 cores and 1TB of cache memory in the Oracle ZFSSA 7420.  Apparently, when you enable a single volume/LUN with compression, 3 of the 4 available cores and 2GB of the 8GB of memory become dedicated solely to compression operations.  So with the IBM v7000, if you turn on compression, kiss away 75% of your storage CPU resources on the particular node/controller that LUN lives on.  IBM affirms this by saying that if you have more than 25% CPU used before compression is enabled, then you should not turn it on.  They also have a hard limit of 200 compressed volumes within a 2-node I/O group, and they say not to mix compressed and uncompressed LUNs/volumes in the same storage pool.  Probably one of the least desirable aspects of the IBM compression is cost $$$.  IBM is the only vendor here that charges extra for compression.  With SVC they charge per TB, and with the v7000 they charge per enclosure, so either way, every time you add disk to an I/O group/controller pair that has compression enabled you are going to be hit with more software licensing as well.  Another major SVC and v7000 viability concern arises for customers that would like to exploit IBM's Easy Tier auto-tiering technology in addition to Real-Time Compression, because RTC and Easy Tier are currently mutually exclusive: Easy Tier is automatically disabled for compressed volumes and cannot be enabled.

"An I/O Group that is servicing at least one compressed volume dedicates certain processor and memory resources for exclusive use by the compression engine."
Reference: http://www.redbooks.ibm.com/redpapers/pdfs/redp4859.pdf

Special Thanks

I want to especially thank Jeff Wright (Oracle ZFSSA Product Management) and  Mark Kremkus (Fellow Oracle Storage Sales Consultant) who both contributed to this entry.

Friday Jul 13, 2012

Oracle ZFSSA Smashes IBM XIV While Running Oracle ERP On Oracle RAC DB

I was recently part of a customer proof of concept where we tested running their Oracle ERP on Oracle T-4 servers, with Oracle RAC on T-4s, and storage on an Oracle ZFS Storage Appliance 7420.  The performance we achieved was awesome!  Without any optimization we were 2.5x faster than their existing production system at running their month-end close.

Big Ugly SQL Query Problems:

The DBA also ran what he called some ugly queries and compared the results.  Again, we smoked the XIV.

How Did You Do This?

The XIV had 144 x 7200 RPM drives; we had 60 x 15K drives.  The answer is CACHE, and lots of it...  The system we used for the test had 1TB of cache split across 2 controllers, 2TB of L2ARC read SSDs, and a few write SSDs to boot! Our Hybrid Storage Pool design accounts for a lot of the high performance.  Keeping the active IO in a higher tier significantly lowered disk IO and in return brought down the latencies...  Notice the incredibly low latencies: out of about 11,000 IOPS, over 10,000 of them are less than 1.02ms.  When we looked at the AWR report from the previous month-end close, the number one wait event was "db file sequential read" with an average wait of 14ms, accounting for 50% of the top 5 wait events.  I would say we made a huge dent in that.
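The arithmetic behind that claim is simple (illustrative only, using the round numbers quoted above):

    # Compare the observed latency distribution against the old AWR numbers.
    total_iops = 11_000
    fast_iops = 10_000          # IOs completing in under 1.02 ms
    old_avg_wait_ms = 14.0      # 'db file sequential read' before the POC

    print(f"{fast_iops / total_iops:.0%} of IOs completed in under 1.02 ms")
    print(f"Roughly {old_avg_wait_ms / 1.02:.0f}x faster than the old 14 ms average wait")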

What was interesting was that we really only used about 25% of what the total hardware setup could do.  We realized that application modification and parallelization could produce a much larger performance improvement.  Unfortunately, we ran out of time on this POC, but the customer expects to see even larger gains in performance in the future.

What Else Did You See?

Interestingly, we turned on LZJB compression, which uses almost no CPU on our controllers, to see what type of compression we could get on their database.  We saw 3.28x compression on their database data, which was awesome!  This is more goodness for IO: it means we transfer 3.28x less data back and forth to the slowest component in the whole system, spinning disk.  The customer could also expect to buy much less disk.  In a test/dev environment they could use our integrated snapshot/cloning feature to quickly make many test/dev copies of the database without worrying about running out of disk space.  It certainly was a win-win feature.

What Does It All Mean? 

  1. The Oracle ZFSSA is a very fast platform to run Oracle Applications and Databases on.
  2. The Oracle ZFSSA is a solid platform and well tested within Oracle and outside of Oracle for Oracle Applications and Databases. (We had zero problems or errors in our POC)
  3. The Oracle ZFSSA Dtrace Analytics bring rich performance reporting to the storage device.
  4. The Oracle ZFSSA Compression features both improve performance and save customers money.
  5. When sizing a NAS/SAN device to run Oracle, CACHE matters.... The ZFSSA Hybrid Storage Pool has one of the most advanced Cache systems around.

Tuesday Jul 10, 2012

Oracle ZFSSA Hybrid Storage Pool Demo

The ZFS Hybrid Storage Pool (HSP) has been around since the ZFSSA first launched.  It is one of the main contributors to the high performance we see on the Oracle ZFSSA, both in benchmarks and in many production environments.  Below is a short video I made to show, at a high level, just how impactful the HSP is on storage performance.  We squeeze a ton of performance out of our drives with our unique use of cache, write-optimized SSD and read-optimized SSD.  Many have written and blogged about this technology; here it is in action.

Demo of the Oracle ZFSSA Hybrid Storage Pool and how it speeds up workloads.

Wednesday Apr 18, 2012

7420 SPEC SFS Torches EMC/Isilon, Netapp, HDS Comparables

Price/Performance 

Another SPECsfs submission, and another confirmation that the ZFSSA is a force to be reckoned with in the NFS world. The Oracle ZFSSA continues to astound with its performance benchmark numbers. Today Oracle posted the anticipated SPECsfs benchmark numbers for the 7420, numbers that simply leave you wondering HOW?  How is Oracle technology so much faster, more cost effective and more efficient than the competition? I say efficient because Oracle continues to post impressive performance benchmarks, surpassing competitors' multi-million dollar configurations at price points 2-5x lower.

Comparisons

For this comparison, I grabbed the top 2-node Netapp 6240 cluster, a recent Hitachi/BlueArc submission, as well as a 28-node SSD Isilon cluster (not really realistic). The Netapp is fairly close in terms of number of drives and number of controllers, which makes it a good comparison. However, the ZFSSA provides 40% more performance than the 6240 at a $700,000 lower price point!!  This doesn't even factor in maintenance costs, extra software licensing, or the 6240's lack of DTrace Analytics.  I will demo these in an upcoming post.

Another interesting point if you dig into the details: at Netapp's max of 190k IOPS their latency is 3.6ms, while the ZFSSA has only 1.7ms latency at 202k IOPS.  That means with the same 190k IOPS workload Oracle would respond over 2x faster!  I threw Isilon into the mix because they refuse to post SPC-2 results even though they are usually purchased for high-bandwidth applications; they are over $2 million more than the ZFSSA while still posting lower performance. Hitachi is included due to the market perception that they have a competitive NFS system for performance.

The respective system names below link to the SPEC.org detailed summaries, including hardware setup, RAID type, etc.  The 7420 in this case was mirrored.

(ops/sec: higher is better; peak and overall response times are in ms: lower is better)

Storage System          SPECsfs ops/sec   Peak Resp (ms)   Overall Resp (ms)   # of Disks   Exported TB   Est. List Price   $/IOPS
Oracle 7420             267,928           3.1              1.31                280          36.32         $430,332          $1.61
Netapp 6240 (4-node)    260,388           4.8              1.53                288          48.00         $1,606,048        $6.17
Isilon S200 (28-node)   230,782           7.8              3.20                672          172.30        $2,453,708        $10.63
Netapp 6240             190,675           3.6              1.17                288          85.80         $1,178,868        $6.18
Hitachi 3090-G2         189,994           9.5              2.08                368          36.95         ?                 ?
Netapp 3270             101,183           4.3              1.66                360          110.08        $1,089,785        $10.77
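The $/IOPS column is easy to recompute yourself from the ops/sec and list price columns (a quick sketch; the Hitachi entry is omitted because its price is unknown):

    # Recompute $/IOPS from the table above.
    results = {
        "Oracle 7420":           (267_928,   430_332),
        "Netapp 6240 (4-node)":  (260_388, 1_606_048),
        "Isilon S200 (28-node)": (230_782, 2_453_708),
        "Netapp 6240":           (190_675, 1_178_868),
        "Netapp 3270":           (101_183, 1_089_785),
    }
    for system, (ops, price) in results.items():
        print(f"{system:24s} ${price / ops:5.2f} per op/sec")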

Fast Databases, Servers, Sailboats and NFS Storage Systems

So not only do we have some of the fastest sailboats in the world, we also have some of the fastest NFS storage systems.  You could say we have a storage benchmark trifecta, with some of the top results in SPECsfs, SPC-1 and SPC-2.  Take a ZFSSA for a test ride today, improve performance and lower your storage costs.

Tuesday Apr 17, 2012

ZFSSA Storage Stomps IBM DS8800 and XIV

Another benchmark completed by the ZFSSA engineering team, with astounding results.  This is becoming expected and usual around Oracle these days.  This is the 3rd major benchmark for the ZFSSA in the last 7 months.  Oracle released an SPC-2 benchmark result of 10,704 MBPS, earning 2nd place in performance and 1st place in $/MBPS by a long margin, especially against the IBM systems also in the top 10.  The HP P9500 currently holds the number 1 position in terms of performance but is more than 2x the cost of the Oracle solution. In addition, I don't believe the HP solution has any compression capabilities, which is another significant advantage of the ZFSSA.

System         SPC-2 MBPS   Total System Cost
Oracle 7420    10,704       $377,225.38
IBM DS8800     9,706        $2,624,257.00
IBM XIV        7,468        $1,137,641.30
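The $/MBPS gap is worth spelling out; this sketch simply derives it from the table above (the per-MBPS figures and multiples are computed here, not quoted from the SPC-2 reports):

    # Derive $/MBPS and the cost multiple versus Oracle from the SPC-2 table.
    spc2 = {
        "Oracle 7420": (10_704,   377_225.38),
        "IBM DS8800":  ( 9_706, 2_624_257.00),
        "IBM XIV":     ( 7_468, 1_137_641.30),
    }
    oracle_per_mbps = spc2["Oracle 7420"][1] / spc2["Oracle 7420"][0]
    for system, (mbps, cost) in spc2.items():
        per_mbps = cost / mbps
        print(f"{system:12s} ${per_mbps:7.2f}/MBPS  ({per_mbps / oracle_per_mbps:.1f}x Oracle)")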

EMC and Netapp both chose not to participate in this benchmark.  I can only guess this is because they would have to share their $/MBPS, which might not make them look so shiny anymore.  I think they may say that the configs posted are ridiculous and would never be purchased by the common customer; there is certainly some truth to that, even with their previous SPECsfs submissions. However, this is not an issue with the Oracle submission: our tested system is well within reason and practicality for what a typical customer would purchase.  Benchmarks provide a valuable resource for customers to see how the same workload behaves on each vendor's box without doing a disruptive in-house POC bake-off.  It is unfortunate that not all vendors submit results for this and other benchmarks.

The massive performance and aggressive price points reflected in this benchmark add to the growing list of reasons to consider the ZFS Storage Appliance for any of your upcoming SAN or NAS storage projects.

Tuesday Apr 10, 2012

Hybrid Columnar Compression Demo Speeds Up DB Queries 6x, Reduces Space 35x

Check out this demo of the ZFSSA with HCC compression and see both the space savings and the performance increase.


Thursday Feb 23, 2012

Oracle Posts SPEC SFS Benchmark and Crushes Netapp Comparables

Oracle posted another shot across the bow of Netapp.  In Oct 2011, Oracle posted impressive SPC-1 benchmarks that were 2x faster and half the cost of Netapp's.  Now customers looking for proof of the ZFSSA's superior performance and cost have another benchmark to compare.

Why are we posting now?
For a long time the old Sun engineering regime refused to post spec.org SFS results, citing problems with the benchmark, which are real.  However, some customers refused to even look at the Oracle ZFS Storage Appliance because of the lack of benchmark postings, and our competitors like Netapp and EMC would use that as some sort of proof that we must perform poorly.

But Netapp and EMC have other much larger configs that are much faster?
I should point out that Netapp and EMC both have much larger benchmark posts on SPEC SFS, but they are ridiculous configurations that almost no customer would run, and furthermore, would be willing to pay for.  Most customers that purchase NAS to run NFS buy many smaller 2-node HA clusters rather than a 20-million-dollar 24-node NAS cluster.  I tried to include EMC in this comparison but soon realized it was worthless, in that their closest post used a Celerra gateway in front of a 4-engine VMAX; the list price for that would be off the charts, so I did not consider it valuable for this comparison.  My goal was to get a good view of comparable systems that customers might consider for a performance-oriented NAS box using NFS.

Price Matters!
One of the major downsides of the SPEC SFS results is that, unlike SPC, they don't force vendors to post prices so customers can easily compare.  Obviously every customer wants great performance, but price is always a major factor as well.  Therefore I have included list prices as best I could figure them.  For the Netapp prices I used a price sheet I easily found on Google.  When comparing performance-oriented storage, customers should compare $/ops rather than $/GB.

Let's look at the results at a high level.

(ops/sec: higher is better; peak and overall response times are in ms: lower is better)

Storage System   SPECsfs ops/sec   Peak Resp (ms)   Overall Resp (ms)   # of Disks   Exported TB   Est. List Price   $/OPS
Oracle 7320      134,140           2.5              1.51                136          36.96         $184,840          $1.38
Netapp 3270      101,183           4.3              1.66                360          110.08        $1,089,785        $10.77
Netapp 3160      60,507            3.5              1.58                56           10.34         $258,043          $4.26
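One more way to read that table: for the list price of a single Netapp 3270 you could buy several 7320 clusters. This is illustrative arithmetic only, and it assumes a workload that can be split across independent clusters.

    # Price/performance arithmetic from the table above.
    oracle_ops, oracle_price = 134_140,   184_840
    netapp_ops, netapp_price = 101_183, 1_089_785

    clusters_for_same_money = netapp_price / oracle_price
    print(f"One 3270 costs as much as {clusters_for_same_money:.1f} x 7320 clusters")
    print(f"Those clusters deliver roughly {clusters_for_same_money * oracle_ops:,.0f} "
          f"aggregate ops/sec vs {netapp_ops:,}")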

Umm, Why is the ZFSSA so much more efficient?
In a nutshell, it's superior engineering and the use of technologies such as the Hybrid Storage Pool (HSP) in the ZFS Storage Appliance.  The HSP extends flash technology not only to read cache but also to write cache.

The 3160 result includes the use of Netapp PAM read flash cards.  I am not sure why, a year later, they didn't include them in the 3270 test if they improve performance so much.  Maybe they will post another Netapp result with them now?

What now?
Now Oracle ZFSSA engineering has posted results that again blow away Netapp and prove our engineering is outstanding.  It makes sense that we would have an edge when you consider that the NFS protocol itself was invented at Sun.  Netapp has yet to respond with a comparable new SPC-1 benchmark.  I know some Netapp bloggers were looking for us to post SPEC SFS results, thought we never would, and therefore said our performance must be poor.  Now we have posted impressive results, and there will be more to come, stay tuned.  As one famous blogger has said, the proof is in the pudding.
