It’s time for an update on the ZFS Storage Appliance performance landscape. Moore’s law is just relentless and a lot of things are happening in ZFS land as a consequence.

First, the major vehicle for better performance is our new March 2017 software release, OS 8.7 (full name 2013.06.05.7.0,1-1.23). For those with a MOS account, the release notes are packed with information and worth a read, and not just if you are an insomniac. But let me give you a few spoilers right here.

News Flash

The OS 8.7 release comes with a variety of improvements to unleash the power of our 2-socket ZS5-2 and 4-socket ZS5-4 servers, starting with the much anticipated All Flash Pool (AFP) and SAS-3 disk trays (to complement the PCIe gen-3 HBAs).

Just for grins, a ZS5-4 2-node storage cluster has 8 x 18-core 2.3 GHz Xeons (that’s a lot) and holds up to 3 TB of RAM (also a lot). That much CPU punch gives unmatched compression and encryption capabilities, and the large RAM provides incredibly high filesystem cache hit ratios. At the high end, we’re talking multiple GB/sec per SAS-3 port and tens of GB/sec per controller.
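For the back-of-envelope inclined, the raw arithmetic behind those numbers looks like this (a minimal sketch in plain Python, using only the figures quoted above):

```python
# Back-of-envelope totals for a 2-node ZS5-4 cluster, using only the figures above.

sockets = 8               # 4-socket controllers, two per cluster
cores_per_socket = 18     # 18-core, 2.3 GHz Xeons
ghz_per_core = 2.3

total_cores = sockets * cores_per_socket       # 144 cores in the cluster
aggregate_ghz = total_cores * ghz_per_core     # ~331 GHz of nominal clock

print(f"total cores     : {total_cores}")
print(f"aggregate clock : {aggregate_ghz:.0f} GHz (nominal; ignores turbo and SMT)")
```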

As a demonstration of this power, the ZS5-2 cluster was showcased on the SPC-2 benchmark, delivering an aggregate mark of 24,397.12 MB/sec.

More about the ZS5 line is available here.

The ZS5-4 line will also quench your thirst for all-flash storage, as a ZS5-4 cluster is capable of hosting 2.4 PB of flash SSD. Even a ZS5-2 cluster with 2 trays of flash devices can approach or top 1 million (not dollars, Dr. Evil… IOPS) and can do so on a variety of workloads. Moreover, flash devices reduce the importance of the cache hit ratio, allowing good performance even for data working sets that don’t fit in cache. Deployment using AFP is clearly the solution of choice for your datasets with the greatest I/O intensity.
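To get a feel for where a number like that can come from, here is a rough sketch; the tray slot count and per-SSD IOPS figures below are purely illustrative assumptions, not ZS5 specifications:

```python
# Rough feel for an all-flash pool's IOPS budget.
# Slot count and per-SSD IOPS are illustrative assumptions, not ZS5 specs.

trays = 2
slots_per_tray = 24        # assumed fully populated tray
iops_per_ssd = 25_000      # assumed small-block read IOPS per device

raw_device_iops = trays * slots_per_tray * iops_per_ssd
print(f"raw device IOPS budget: {raw_device_iops:,}")   # 1,200,000

# The delivered number depends on RAID layout, block size and read/write mix,
# which is why the claim is "approach or top 1 million" rather than a fixed figure.
```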

If, on the other hand, you need more space for data that is less I/O intensive, the hybrid storage pool (HSP) economically serves up to 9 PB of hard disk storage while dynamically autotiering hot data into up to 614 TB of L2ARC flash. The flash-based L2ARC is much more powerful than in the past and is now available on devices that fail over between storage controllers, allowing much faster recovery after a reboot.
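A quick sketch, using only the capacity figures just quoted, of how much of a maxed-out hybrid pool the L2ARC can keep warm:

```python
# How much of a maxed-out hybrid pool can sit in L2ARC flash,
# using only the capacity figures quoted above.

hdd_pool_tb = 9_000        # up to 9 PB of hard disk storage
l2arc_tb = 614             # up to 614 TB of L2ARC flash

cacheable_fraction = l2arc_tb / hdd_pool_tb
print(f"L2ARC can hold ~{cacheable_fraction:.1%} of the full pool")   # ~6.8%

# Autotiering matters precisely because working sets are usually a small
# fraction of total capacity: the hot few percent land in RAM and L2ARC
# while cold data stays on the (cheaper) HDDs.
```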

A single L2ARC device that commonly handles 10K IOPS basically serves the equivalent of about 50 HDDs’ worth of IOPS. Moreover, today’s devices have great ingest throughput. Our improved feeding code sustains rates in excess of 100 MB/sec per L2ARC device, which keeps the L2ARC from missing out on “soon to be evicted” data. The improved feeding was the key to boosting the L2ARC hit ratio, which in turn delivers SSD-based, low-latency IOPS for your warm data while hot data is served from RAM. A good L2ARC hit ratio means the HDDs in the hybrid pool are less burdened by expensive random reads and are therefore available for writes and for streaming workloads, where HDDs excel.
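The 50-HDD equivalence is simple arithmetic; the per-HDD random-read figure below is the rule-of-thumb assumption implied by that ratio, not a measured number:

```python
# The "one L2ARC device ~ 50 HDDs" equivalence, spelled out.
# The per-HDD figure is a rule-of-thumb assumption implied by the ratio above.

l2arc_device_iops = 10_000     # random-read IOPS a single L2ARC device commonly handles
hdd_random_read_iops = 200     # assumed rule of thumb for a spinning drive

hdd_equivalent = l2arc_device_iops / hdd_random_read_iops
print(f"one L2ARC device ~ {hdd_equivalent:.0f} HDDs of random-read IOPS")  # ~50

# Feed rate: at >100 MB/sec per device, a single L2ARC SSD can ingest
# roughly 8.6 TB of soon-to-be-evicted data per day.
feed_mb_per_sec = 100
tb_per_day = feed_mb_per_sec * 86_400 / 1_000_000
print(f"ingest at {feed_mb_per_sec} MB/sec ~ {tb_per_day:.1f} TB/day per device")
```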

Scalability and Features

Our OS 8.7 release comes with a lot of other goodies; to quickly mention a few:
  • Trendable capacity analytics to monitor space consumption of the pool over time (the crowd goes wild!), making it easier to stay within the current capacity recommendations.
  • Metadevices store important metadata, including the new dedup table.
  • LUN I/O throttling provides the ability to set throughput limits on targeted LUNs, preventing a noisy-neighbor effect where one heavy consumer impacts all others.
  • Asynchronous dataset deletion runs deletions in the background for quicker failover.
  • LZ4: compresses and decompresses compressible data quickly (and efficiently) and handles incompressible data quickly by bailing out (see the sketch after this list).
  • More work on the scalability of the ARC. With today’s memory sizes and the ZS5 controllers, we are constantly improving OS scalability, taking advantage of improved RW locks in more places, and boosting the ARC with concurrent eviction code and better tracking of ARC metadata.
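As an aside, the LZ4 bail-out behavior mentioned in the list is easy to see for yourself. Here is a minimal user-space sketch using the third-party lz4 Python package; it only illustrates the algorithm’s behavior and is not the appliance’s in-kernel implementation:

```python
# Quick illustration of LZ4's behavior on compressible vs incompressible data,
# using the third-party `lz4` Python package (pip install lz4).
# User-space only, purely for illustration of the algorithm's bail-out property.
import os
import time
import lz4.frame

def ratio_and_time(data: bytes):
    """Return (compression ratio, seconds) for one lz4.frame.compress() call."""
    start = time.perf_counter()
    out = lz4.frame.compress(data)
    elapsed = time.perf_counter() - start
    return len(data) / len(out), elapsed

text_like = b"the quick brown fox jumps over the lazy dog " * 100_000   # compressible
random_like = os.urandom(len(text_like))                                # incompressible

for name, buf in (("compressible", text_like), ("incompressible", random_like)):
    ratio, secs = ratio_and_time(buf)
    print(f"{name:>15}: ratio {ratio:5.2f}x in {secs * 1000:6.1f} ms")

# On incompressible input LZ4 quickly gives up looking for matches, so the
# "compressed" output is about the same size as the input and the call stays
# fast, which is exactly the bail-out property mentioned above.
```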
With all these goodies in place, read on about our updated best practices.