Thursday, September 17, 2009

iSCSI unleashed



One of the much-anticipated features of the 2009.Q3 release of the Fishworks OS is a complete rewrite of the iSCSI target implementation, known as the Common Multiprotocol SCSI Target, or COMSTAR. The new target code is an in-kernel implementation that replaces what was previously known as the iSCSI target daemon, a user-level implementation of iSCSI.

Should we expect huge performance gains from this change? You bet!

But like most performance questions, the answer is often: it depends. The measured performance of a given test is gated by the weakest link triggered. iSCSI is just one component among many others that can end up gating performance. If the daemon was not a limiting factor, then that's your answer.

The target daemon was a userland implementation of iSCSI: daemon threads would read data from a storage pool and write data to a socket, or vice versa. Moving to a kernel implementation opens up options to bypass at least one of the copies, and that is being considered as a future option. But extra copies, while undesirable, do not necessarily contribute to small-packet latency or large-request throughput. For small requests, the copy is small change compared to the request handling. For large-request throughput, the important thing is that the data path establishes a pipelined flow in order to keep every component busy at all times.

But the way threads interact with one another can be a much greater factor in delivered performance. And there lies the problem. The old target daemon suffered from one major flaw: each and every iSCSI request would require multiple trips through a single queue (shared between every LUN), and that queue was being read and written by 2 specific threads. Those 2 threads would end up fighting for the same locks. This was compounded by the fact that user-level threads can be put to sleep when they fail to acquire a mutex, and going to sleep is a costly operation for a user-level thread, implying a system call and all the accounting that goes with it.

So while the iSCSI target daemon gave reasonable service for large requests, it was much less scalable in terms of the number of IOPS it could serve and the CPU efficiency with which it could do so. IOPS is of course a critical metric for block protocols.

As an illustration, with 10 client initiators and 10 threads per initiator (so 100 outstanding requests) doing 8K cache-hit reads, we observed:

    Old Target Daemon    COMSTAR     Improvement
    31K IOPS             85K IOPS    2.7X


Moreover, the target daemon was consuming 7.6 CPUs to service those 31K IOPS, while COMSTAR could handle 2.7X more IOPS consuming only 10 CPUs (31K/7.6 is roughly 4.1K IOPS per CPU versus 85K/10 = 8.5K), a 2X improvement in IOPS-per-CPU efficiency.

On the write side, with a disk pool that had 2 striped write-optimized SSDs, COMSTAR gave us 50% more throughput (130 MB/sec vs 88 MB/sec) and 60% more CPU efficiency.

ImmediateData

During our testing we noted a few interesting contributors to delivered performance. The first was the setting of the iSCSI ImmediateData parameter (see iscsiadm(1M)). On the write path, that parameter causes the iSCSI initiator to send up to 8K of data along with the initial request packet. While this is generally a good idea, we found that for certain write sizes it would trigger a condition in the ZIL that caused ZFS to issue more data than necessary through the Logzillas. The problem is well understood and remediation is underway, and we expect to reach a point where keeping the default value of ImmediateData=yes is best. But as of today, for those attempting world-record data transfer speeds through Logzillas, setting ImmediateData=no and using a 32K or 64K write size might yield positive results, depending on your client OS.

Interrupt Blanking

Interested in low-latency request response? Interestingly, a chunk of that response time is lost in the obscure settings of network card drivers. Network cards will often delay pending interrupts in the hope of coalescing more packets into a single interrupt. The extra efficiency often results in more throughput at high data rates, at the expense of small-packet latency. For 8K requests we managed to get 15% more single-threaded IOPS by tweaking one such client-side parameter. Historically such tuning has always been hidden in the bowels of each driver and is specific to every client OS, so that's too broad a topic to cover here. But for Solaris clients, the Crossbow framework is aiming, among other things, to make the latency vs throughput decision much more adaptive to operating conditions, relaxing the need for per-workload tuning.

WCE Settings

Another important parameter to consider for COMSTAR is the 'write cache enable' (WCE) bit. By default, every write request to an iSCSI LUN needs to be committed to stable storage, as this is what most consumers of block storage expect. That means each individual write request to a disk-based storage pool will take minimally a disk rotation, or 5ms to 8ms, to commit. This is also why a write-optimized SSD is quite critical to many iSCSI workloads, often yielding 10X performance improvements. Without such an SSD, iSCSI performance will appear quite lackluster, particularly for lightly threaded workloads, which are more affected by latency characteristics.

One could then feel justified to set the write cache enable bit on some LUNs in order to recoup some spunk in their engine. The good news here is that in the new 2009.Q3 release of Fishworks the setting is now persistent across reboots and reconnection events, fixing a nasty condition of 2009.Q2. However, one should be very careful with this setting, as the end consumer of block storage (Exchange, NTFS, Oracle, ...) is quite probably operating under an unexpected set of conditions. This setting can lead to application corruption in case of an outage (there is no risk for the storage's internal state).

There is one exception to this caveat and it is ZFS itself. ZFS is designed to operate safely and correctly on top of devices that have their write caches enabled. That is because ZFS flushes write caches whenever application semantics or its own internal consistency require it. So for a zpool created on top of iSCSI LUNs, setting WCE on the LUNs to boost performance is well justified.
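
For illustration, here is a minimal sketch of building such a client-side zpool on top of appliance iSCSI LUNs from a Solaris initiator; the discovery address and device names are placeholders, not taken from any setup described above:

        # Point discovery at the appliance and enable SendTargets discovery
        iscsiadm add discovery-address 192.168.1.10:3260    # placeholder address
        iscsiadm modify discovery --sendtargets enable
        devfsadm -i iscsi                                   # create the device nodes
        # The cXtYdZ names below are placeholders; use format(1M) to find yours
        zpool create itank mirror c2t0d0 c3t0d0
        # ZFS issues its own cache flushes, so enabling WCE on these LUNs is safe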

Synchronous write bias

Finally, as described in my blog entry about synchronous write bias, we now have the option to bypass the write-optimized SSDs for a LUN if the workload it receives is less sensitive to latency. This would be the case for a highly threaded workload doing large data transfers. Experimenting with this new property is warranted at this point.
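
On a generic OpenSolaris ZFS system, the analogous knob is the per-dataset logbias property; a minimal sketch, with a hypothetical pool and dataset name:

        # latency (the default) uses the write-optimized SSD log device;
        # throughput bypasses it and sends log blocks to the main pool disks
        zfs set logbias=throughput tank/iscsivol
        zfs get logbias tank/iscsivol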

Thursday, June 11, 2009

Compared Performance of Sun 7000 Unified Storage Array Line

The Sun Storage 7410 Unified Storage Array provides high performance for NAS environments and can be used for a wide variety of applications. With a _single_ 10 GbE connection, the Sun Storage 7410 Unified Storage Array delivers 10 GbE line speed.

  • The Sun Storage 7410 Unified Storage Array delivers 1 GB/sec of throughput.
  • The Sun Storage 7310 Unified Storage Array delivers over 500 MB/sec on streaming writes for backup and imaging applications.
  • The Sun Storage 7410 Unified Storage Array delivers over 22,000 8K synchronous writes per second, combining great DB performance with the ease of deployment of Network Attached Storage, while delivering the economic benefits of inexpensive SATA disks.
  • The Sun Storage 7410 Unified Storage Array delivers over 36,000 random 8K reads per second from a 400GB working set for great mail application responsiveness. This corresponds to an enterprise of 100,000 people with every employee accessing new data every 3.6 seconds, consolidated on a single server.


All those numbers characterise a single head of the clusterable 7410. The 7000 clustering technology stores all data in dual-attached disk trays and no state is shared between cluster heads (see Sun 7000 Storage clusters). This means that an active-active cluster of 2 healthy 7410s will deliver 2X the performance posted here.

Also note that the performance posted here represents what is achieved under a tightly constrained, precisely defined workload (see Designing Performance Metrics for Sun Storage 7000, below) and does not represent the performance limits of the systems. This is testing a single 10 GbE port only; each product can have 2 or 4 10 GbE ports, and by running load across multiple ports the server can deliver even higher performance. Achieving maximum performance is a separate exercise done extremely well by my friend Brendan.

Measurement Method

To measure our performance we used the open source Filebench tool, accessible from SourceForge (Filebench on solarisinternals.com). Measuring the performance of NAS storage is not an easy task. One has to deal with the client-side cache, which needs to be bypassed, the synchronisation of multiple clients, and the presence of client-side page flushing daemons, which can turn asynchronous workloads into synchronous ones. Because our Storage 7000 line can have such large caches (up to 128GB of RAM and more than 500GB of secondary cache) and we wanted to test disk responses, we needed to find a backdoor way to flush those caches on the servers. Read Amitabha's Filebench Kit entry on the topic, in which he posts a link to the toolkit used to produce the numbers.

We recently released our first major software update, 2009.Q2, and along with it a new lower-cost clusterable 96 TB storage system, the 7310.

We report here the numbers of a 7310 with the latest software release, compared to those previously obtained for the 7410, 7210 and 7110 systems, each attached to an 18 to 20 client pool over a single 10 GbE interface with regular Ethernet frames (1500 bytes). By the way, looking at Brendan's results above, I encourage you to upgrade to jumbo frame Ethernet for even more performance, and note that our servers can drive two 10 GbE ports at line speed.

Tested Systems and Metrics

The tested setups are:
        Sun Storage 7410, 4 x quad core: 16 cores @ 2.3 GHz AMD.
        128GB of host memory.
        1 dual-port 10 GbE Network Atlas card, nxge driver, 1500 MTU.
        Streaming tests:
        2 x J4400 JBOD, 44 x 500GB SATA drives 7.2K RPM, mirrored pool,
        3 write-optimized 18GB SSDs, 2 read-optimized 100GB SSDs.
        IOPS tests:
        12 x J4400 JBOD, 280 x 500GB SATA drives 7.2K RPM, mirrored pool,
        272 data drives + 8 spares.
        8 mirrored write-optimized 18GB SSDs, 6 read-optimized 100GB SSDs.
        FW OS: ak/generic@2008.11.20,1-0

        Sun Storage 7310, 2 x quad core: 8 cores @ 2.3 GHz AMD.
        32GB of host memory.
        1 dual-port 10 GbE Network Atlas card (1 port used), nxge driver, 1500 MTU.
        4 x J4400 JBOD for a total of 92 SATA drives 7.2K RPM,
        43 mirrored pairs.
        4 write-optimized 18GB SSDs, 2 read-optimized 100GB SSDs.
        FW OS: Q2 2009.04.10.2.0,1-1.15

        Sun Storage 7210, 2 x quad core: 8 cores @ 2.3 GHz AMD.
        32GB of host memory.
        1 dual-port 10 GbE Network Atlas card (1 port used), nxge driver, 1500 MTU.
        44 x 500GB SATA drives 7.2K RPM, mirrored pool,
        2 write-optimized 18GB SSDs.
        FW OS: ak/generic@2008.11.20,1-0

        Sun Storage 7110, 2 x quad core Opteron: 8 cores @ 2.3 GHz AMD.
        8GB of host memory.
        1 dual-port 10 GbE Network Atlas card (1 port used), nxge driver, 1500 MTU.
        12 x 146GB SAS drives, 10K RPM, in a 3+1 RAID-Z pool.
        FW OS: ak/generic@2008.11.20,1-0


The newly released 7310 was tested with the most recent software revision, and that certainly gives the 7310 an edge over its peers. The 7410, on the other hand, was measured here managing a much larger contingent of storage, including mirrored Logzillas and 3 times as many JBODs, and that is expected to account for some of the performance delta being observed.

    Metric                                                  Short Name
    1 thread per client streaming cached reads              Stream Read light
    1 thread per client streaming cold cache reads          Cold Stream Read light
    10 threads per client streaming cached reads            Stream Read
    20 threads per client streaming cold cache reads        Cold Stream Read
    1 thread per client streaming write                     Stream Write light
    20 threads per client streaming write                   Stream Write
    128 threads per client 8K synchronous writes            Sync write
    128 threads per client 8K random read                   Random Read
    20 threads per client 8K random read on cold caches     Cold Random Read
    8 threads per client 8K small file create IOPS          Filecreate

There are 6 read tests, 2 write tests and 1 synchronous write test, which overwrites its data files as a database would. A final filecreate test completes the metrics. Tests execute against a 20GB working set _per client_, times 18 to 20 clients. There are 4 sets in total, running over independent shares, for a total of 80GB per client. So before actual runs are taken, we create all working sets, or 1.6 TB of pre-created data. Then before each run, we clear all caches on the clients and the server.

In each of the 3 groups of 2 read tests, the first one benefits from no caching at all, and the throughput delivered to the clients over the network is observed to come from disk. The test runs for N seconds, priming data in the storage caches. A second run (non-cold) is then started after clearing the client-side caches. Those tests will see 100% of the data delivered over the network link, but not all of it is coming off the disks. Streaming tests will race through the cached data and then finish off reading from disks. The random read test can also benefit from increasingly cached responses as the test progresses. The exact caching characteristics of a 7000-line system will depend on a large number of parameters, including your application access pattern. Numbers here reflect the performance of a fully randomized test over 20GB per client x 20 clients, or a 400GB working set. Upcoming studies will include more data (showing even higher performance) for workloads with higher cache-hit ratios than those used here.

In a Storage 7000 server, disks are grouped together in one pool and then individual shares are created. Each share has access to all disk resources, subject to any reservation (a guaranteed minimum) and quota (a maximum) that might be set. One important setup parameter associated with each share is the database record size. It is generally better for IOPS tests to use 8K records and for streaming tests to use 128K records. The record size can be set dynamically based on expected usage.
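
On the appliance this is a per-share setting; on a plain ZFS system the equivalent knob is the recordsize property. A minimal sketch with hypothetical pool and share names:

        # 8K records for a database/IOPS-oriented share, 128K for streaming
        zfs set recordsize=8k   tank/dbshare
        zfs set recordsize=128k tank/backupshare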

The tests shown here were obtained with NFSv4, the default for Solaris clients (NFSv3 is expected to come out slightly better). The clients were running Solaris 10, with tcp_recv_hiwat tuned to 400K and dopageflush=0 to prevent buffered writes from being converted into synchronous writes.
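
For reference, a sketch of that client-side tuning on Solaris 10, using the values stated above (treat the exact mechanism as illustrative for your own clients):

        # larger TCP receive window on each client
        ndd -set /dev/tcp tcp_recv_hiwat 400000
        # and in /etc/system, to stop the page flusher from forcing writes out:
        #   set dopageflush = 0
        # (a reboot is needed for /etc/system changes to take effect)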

Compared Results of the 7000 Storage Line

    NFSv4 Test               7410 Head       7310 Head       7210 Head       7110 Head
                             Mirrored Pool   Mirrored Pool   Mirrored Pool   3+1 Raid-Z
    Throughput
    Cold Stream Read light   915 MB/sec      685 MB/sec      719 MB/sec      378 MB/sec
    Stream Read light        1074 MB/sec     751 MB/sec      894 MB/sec      416 MB/sec
    Cold Stream Read         959 MB/sec      598 MB/sec      752 MB/sec      329 MB/sec
    Stream Read              1030 MB/sec     620 MB/sec      792 MB/sec      386 MB/sec
    Stream Write light       480 MB/sec      507 MB/sec      490 MB/sec      226 MB/sec
    Stream Write             447 MB/sec      526 MB/sec      481 MB/sec      224 MB/sec
    IOPS
    Sync write               22383 IOPS      8527 IOPS       10184 IOPS      1179 IOPS
    Filecreate               5162 IOPS       4909 IOPS       4613 IOPS       162 IOPS
    Cold Random Read         28559 IOPS      5686 IOPS       4006 IOPS       1043 IOPS
    Random Read              36478 IOPS      7107 IOPS       4584 IOPS       1486 IOPS
    Per Spindle IOPS         272 Spindles    86 Spindles     44 Spindles     12 Spindles
    Cold Random Read         104 IOPS        76 IOPS         91 IOPS         86 IOPS
    Random Read              134 IOPS        94 IOPS         104 IOPS        123 IOPS






Analysis



The data shows that the entire Sun Storage 7000 line is a throughput workhorse, delivering 10 Gbps-level NAS services per cluster head node, using a single network interface and a single IP address for easy integration into your existing network.

As with other storage technologies, streaming writes require more involvement from the storage controller, and this leads to about 50% less write throughput compared to read throughput.

The use of write-optimized SSDs in the 7410, 7310 and 7210 also gives this storage line very high synchronous write capability. This is one of the most interesting results as it maps to database performance. The ability to sustain roughly 24,000 O_DSYNC writes per second, or 192 MB/sec of synchronized user data, using only 48 inexpensive SATA disks and 3 write-optimized SSDs is one of the many great performance characteristics of this novel storage system.

Random read tests generally map directly to individual disk capabilities and are a measure of total disk rotations. The cold runs show that all our platforms deliver data at the expected 100 IOPS per spindle for these SATA disks. Recall that our offering is based on economical, energy-efficient 7.2K RPM disk technology. For cold random reads, a mirrored pair of 7.2K RPM disks offers the same total disk rotations (and IOPS) as an expensive and power-hungry 15K RPM disk, but in a much more economical package.

Moreover, the difference between the warm and cold random read runs shows that the Hybrid Storage Pool (HSP) provides a 30% boost even on this workload, which randomly addresses a 400GB working set against 128GB of controller cache. The effective boost from the HSP can be much greater depending on the cacheability of the workload.

If we consider an organisation in which the average mail message is 8K in size, our results show that we could consolidate 100,000 employees on a single 7410, with each employee accessing new data every 3.6 seconds at a 70ms response time.

Messaging systems are also big consumers of file creation; I've shown in the past how efficient ZFS can be at creating small files (Need Inodes?). For the NFS protocol, file creation is a straining workload, but the 7000 storage line comes out not too bad with more than 5000 file creates per second per storage controller.

Conclusion

Performance can never be summarised with a few numbers and we have just begun to scratch the surface here. The numbers presented here, along with the disruptive pricing of the Hybrid Storage Pool, will, I hope, go a long way toward showing the incredible power of the Open Storage architecture being proposed. And keep in mind that this performance is achievable using less expensive, less power-hungry SATA drives, and that every data service (NFS, CIFS, iSCSI, FTP, HTTP, etc.) offered by our Sun Storage 7000 servers is available at no additional software cost to you.

Disclosure Statement: Sun Microsystems generated results using Filebench. Results reported 11/10/08 and 26/05/2009. Analysis done on June 6, 2009.

Tuesday, March 03, 2009

Performance of the Hybrid Storage Pool

I hope to be able to keep your attention for 30 minutes of impromptu conversation about OpenStorage performance. At the end of the piece, I show off my geeky capture/replay technology.

Friday, February 13, 2009

Need Inodes ?

It seems that some old-school filesystems still need to statically allocate inodes to hold pointers to individual files. Normally this should not cause too many problems, as default settings account for an average file size of 32K. Or will it?

If the average size of the files you need to store on the filesystem is much smaller than this, then you are likely to eventually run out of inodes, even if the space consumed on the storage is far from exhausted.

In ZFS, inodes are allocated on demand, so the question came up: how many files can I store on a piece of storage? I managed to scrape up an old 33GB disk, created a pool, and wanted to see how many 1K files I could store on that storage.

ZFS stores files with the smallest number of sectors possible, so 2 sectors were enough to store the data. Then of course one also needs to store some amount of metadata, indirect pointers, directory entries, etc. to complete the story. There I didn't know what to expect. My program would create 1000 files per directory, with a maximum depth of 2 levels; nothing sophisticated was attempted here (a sketch of such a loop is shown below).
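
A minimal sketch of that kind of file-creation loop (not the original program; paths and layout are illustrative):

        #!/bin/ksh
        # Create 1K files, 1000 per directory, until interrupted
        base=/space/create
        i=0
        while true; do
                dir=$base/d$((i / 1000))
                [ -d $dir ] || mkdir -p $dir
                # 1K of data per file: two 512-byte sectors on disk
                dd if=/dev/zero of=$dir/f$i bs=1k count=1 2>/dev/null
                i=$((i + 1))
        done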

So I let my program run for a while and eventually interrupted it at 86% of disk capacity:

        Filesystem  size  used avail capacity Mounted on
	space          33G  27G  6.5G  81%  /space
Then I counted the files.

        #ptime find /space/create | wc
	real  51:26.697
	user   1:16.596
	sys   25:27.416
	23823805 23823805 1405247330


So that is 23.8M files consuming 27GB of data, basically less than 1.2K of used disk space per 1K file. A legacy filesystem that allocates one inode per 32K of capacity would have run out of inodes after a meager 1M files, but ZFS managed to store 23X more on the disk without any tuning.

The find command here is mostly gated by fstat performance, and we see that we did the 23.8M fstats in about 3060 seconds, or roughly 7777 fstats per second.

But here is the best part: how long did it take to create all those files?

   real 1:09:20.558
   user   9:20.236
   sys  2:52:53.624


This is hard to believe, but it took about 1 hour for 23.8 million files. This is on a single direct-attached drive:

   3. c1t3d0 <FUJITSU-MAP3367N SUN36G-0401-33.92GB>


ZFS created on average 5721 files per second. Now obviously such a drive cannot do 5721 IOPS, but with ZFS it didn't need to. File creation is actually more of a CPU benchmark, because the application is interacting with the host cache; it's the task of the filesystem to then create the files on disk in the background. With ZFS, the combination of the allocate-on-write policy and the sophisticated I/O aggregation in the I/O scheduler (dynamics) means that the I/Os for multiple independent file creates can be coalesced. Using DTrace I counted the number of I/Os issued and the file creates per minute; typical samples show more than 200K files created per minute using about 3000 I/Os per minute, or 3300 files per second using a mere 50 IOPS!

	Per Minute
	Sample    Creates    I/Os
	#1        214643     2856
	#2        215409     3342
	#3        212797     2917
	#4        211545     2999

Finally, with all these files, is scrubbing a problem? It took 1h34m to scrub that many files, at a pace of about 4200 scrubbed files per second. No sweat.

pool: space
 state: ONLINE
 scrub: scrub completed after 1h34m with 0 errors on Wed Feb 11 12:17:20 2009


If you need to create, store and otherwise manipulate lots of small files efficiently, ZFS has got to be the filesystem of choice for you.

Monday, November 10, 2008

Designing Performance Metrics for Sun Storage 7000

One of the necessary checkpoints before launching a product is to be able to assess its performance. With the Sun Storage 7xxx line we had a challenge, in that the only NFS benchmark of any notoriety was SPEC SFS. Now this benchmark has its supporters, and some customers might be attached to it, but it's important to understand what a benchmark actually says.

The SFS benchmark is largely about "cache busting" the server. This is interesting, but at Sun we think that caches are actually helpful in real scenarios. Data goes in cycles in which it becomes hot at times. Retaining that data in cache layers allows much lower latency access and much better human interaction with storage engines. Being a cache-busting benchmark, SFS numbers end up as a measure of the number of disk rotations attached to the NAS server, so a good SFS result requires 100s or 1000s of expensive, energy-hungry 15K RPM spindles. To deliver good IOPS to the end user, layers of caching matter more to the experience and to the cost efficiency of the solution.

So we needed another way to talk about performance. Benchmarks tend to test the system in peculiar ways that do not necessarily reflect the workloads each customer is actually facing. There are very many workload generators for I/O, but one interesting one that is open source and extensible is Filebench, available in source form.

So we used Filebench to gather basic performance information about our system, with the hope that customers will then use Filebench to generate profiles that map to their own workloads. That way, different storage options can be evaluated on hopefully more meaningful tests than canned benchmarks.

Another challenge is that a NAS server interacts with client systems that themselves keep a cache of the data. Given that we wanted to understand the back-end storage, we had to set up the tests to avoid client-side caching as much as possible. So, for instance, between the phase of file creation and the phase of actually running the tests, we needed to clear the client caches, and at times the server caches as well. These possibilities are not readily accessible with the simplest load generators and we had to do this in rather ad-hoc fashion. One validation of our runs was to ensure that the amount of data transferred over the wire, observed with Analytics, was consistent with the aggregate throughput measured at the clients.

Still another challenge was that we needed to test a storage system designed to interact with a large number of clients. Again, load generators are not readily set up to coordinate multiple clients and gather global metrics. During the course of the effort Filebench did come up with a clustered mode of operation, but we were too far along our path to take advantage of it.

This coordination of clients is important because the performance information we want to report is the one that is actually delivered to the clients. Each client will report its own value for a given test and our tool will sum up the numbers; but such a sum is only valid inasmuch as the tests ran on the clients in the same timeframe. The possibility of skew between tests is something that needs to be monitored by the person running the investigation.

One way we increased this coordination was to divide our tests into 2 categories: those that required pre-created files, and those that created files during the timed portion of the runs. If not handled properly, file creation would cause significant result skew. The option we pursued here was to have a pre-creation phase for files that was done once. From that point, our full set of metrics could be run and repeated many times with much less human monitoring, leading to better reproducibility of results.

Another goal of this effort was to be able to run our standard set of metrics in a relatively short time, say less than 1 hour. In the end we got that to about 30 minutes per run to gather 10 metrics. Keeping the time short is important because there are lots of possible ways such tests can be misrun; having someone watch over the runs is critical to the value of the output and to its reproducibility. So after having run the pre-creation of files offline, one could run many repeated instances of the tests, validating the runs with Analytics and, through general observation of the system, gaining some insight into the meaning of the output.

At this point we were ready to define our metrics.

Obviously we needed streaming reads and writes. We needed random reads. We needed small synchronous writes, important to database workloads and to the NFS protocol. Finally, small file creation and stat operations completed the mix. For random reading we also needed to distinguish between operating from disks and from storage-side caches, an important aspect of our architecture.

Now another thing that was on my mind was that this is not a benchmark. That means we would not be trying to fine-tune the metrics in order to find out exactly what the optimal number of threads and request size is that leads to the best possible performance from the server. This is not the way your workload is set up. Your number of running client threads is not elastic at will. Your workload is what it is (threading included); the question is how fast it is being serviced.

So we defined precise per-client workloads with a preset number of threads running the operations. We came up with this set just as an illustration of what representative loads could be (a sketch of how one such workload can be expressed in Filebench follows the two lists below):
    1- 1 thread streaming reads from 20G uncached set, 30 sec. 
    2- 1 thread streaming reads from same set, 30 sec.
    3- 20 threads streaming reads from 20G uncached set, 30 sec.
    4- 10 threads streaming reads from same set, 30 sec.
    5- 20 threads 8K random read from 20G uncached set, 30 sec.
    6- 128 threads 8K random read from same set, 30 sec.
    7- 1 thread streaming write, 120 sec
    8- 20 threads streaming write, 120 sec
    9- 128 threads 8K synchronous writes to 20G set, 120 sec
    10- 20 threads metadata (fstat) IOPS from pool of 400k files, 120 sec
    11- 8 threads 8K file create IOPS, 120 sec. 


For each of the 11 metrics, we could propose a mapping to relevant industries:
     1- Backups, Database restoration (source), DataMining , HPC
     2- Financial/Risk Analysis, Video editing, HPC
     3- Media Streaming, HPC
     4- Video Editing
     5- DB consolidation, Mailserver, generic fileserving, Software development.
     6- DB consolidation, Mailserver, generic fileserving, Software development.
     7- User data Restore (destination)
     8- Financial/Risk Analysis, backup server
     9- Database/OLTP
     10- Web 2.0, Mailserver/Mailstore, Software Development
     11- Web 2.0, Mailserver/Mailstore, Software Development 
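
As promised above, here is a minimal sketch of how a per-client workload of this kind (for instance test 3, 20 streaming-read threads over a 20G set) can be expressed in Filebench's workload language; the fileset path, sizes and names are illustrative, not the actual kit used for these results:

        define fileset name=streamset,path=/net/server/export/bench,entries=20,size=1g,prealloc

        define process name=streamreader,instances=1
        {
          thread name=streamreadthread,memsize=10m,instances=20
          {
            flowop openfile name=openf,filesetname=streamset,fd=1
            flowop readwholefile name=readf,fd=1,iosize=1m
            flowop closefile name=closef,fd=1
          }
        }

        run 30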


We managed to get all these tests running except the fstat test (test 10), due to a technicality in Filebench. Filebench insisted on creating the files up front and this test required thousands of them; moreover, Filebench used a method that ended up single-threaded to do so, and in the end the stat information was mostly cached on the client. While we could have plowed through some of the issues, the conjunction of all of these made us put the fstat test aside for now.

Concerning thread counts, we figured that a single-stream read test was at times critical (for administrative purposes) and an interesting measure of latency. Tests 1 and 2 were defined this way, with test 1 starting with cold client and server caches and test 2 continuing the run after having cleared the client cache (but not the server's), thus showing the boost from server-side caching. Tests 3 and 4 are similarly defined with more threads involved, for instance to mimic a media server. Tests 5 and 6 were random read tests, again with test 5 starting with a cold server cache and test 6 continuing with some of the data pre-cached from test 5. Here we did have to deal with client caches, trying to ensure that we didn't hit the client cache too much as the run progressed. Tests 7 and 8 showcased streaming writes for single and 20 streams (per client); reproducibility of tests 7 and 8 is more difficult, we believe, because of the client-side fsflush issue. We found that we could get more stable results by tuning fsflush on the clients. Test 9 is the all-important synchronous write case (for instance a database). This test truly showcases the benefit of our write-side SSDs and also shows why tuning the recordsize to match ZFS records with DB accesses is important. Test 10 was inoperative as mentioned above, and test 11, filecreate, completes the set.

Given that these were predefined test definitions, we're very happy to see that our numbers actually came out really well, particularly for the mirrored configs with write-optimized SSDs. See for instance the results obtained by Amitabha Banerjee.

I should add that these tests can now be used to give ballpark estimates of the capability of the servers. They were not designed to deliver the topmost numbers from any one config. The variability of the runs is at times greater than we'd wish, so your mileage will vary. Using Analytics to observe the running system can be quite informative and a nice way to demo that capability. So use the output with caution and use your own judgment when it comes to performance issues.

Using ZFS as a Network Attached Storage Controller and the Value of Solid State Devices



So Sun is coming out today with the Sun Storage 7000 line of systems, which have ZFS as the integrated volume and filesystem manager using both read- and write-optimized SSDs. What is this Hybrid Storage Pool, and why is it a good performance architecture for storage?

A write-optimized SSD is a device custom designed for the purpose of accelerating operations of the ZFS intent log (ZIL). The ZIL is the part of ZFS that manages the important synchronous operations, guaranteeing that such writes are acknowledged quickly to applications while guaranteeing persistence in case of an outage. Data stored in the ZIL is also kept in memory until ZFS issues the next transaction group (every few seconds).

The ZIL is what stores data urgently (when an application is waiting), but the TXG is what stores data permanently. The ZIL's on-disk blocks are only ever re-read after a failure such as a power outage. So the SSDs used to accelerate the ZIL are write-optimized: they need to handle data at low latency on writes; reads are unimportant.

The TXG is an operation that is asynchronous to applications: apps are generally not waiting for transaction groups to commit. The exception is when data is generated at a rate that exceeds the TXG rate for a sustained period of time. In that case, we become throttled by the pool throughput. In NAS storage this will rarely happen, since network connectivity, even at GB/s, is still much less than what the storage is capable of, and so we do not generate the imbalance.

The important thing now is that in a NAS server, the controller is also running a file-level protocol (NFS or CIFS) and so is knowledgeable about the nature (synchronous or not) of the requested writes. As such it can use the accelerated path (the SSDs) only for the components of the workload that need it. Less competition for these devices means we can deliver both high throughput and low latency together in the same consolidated server.

But here is where it gets nifty. At times, a NAS server might receive a huge synchronous request. We've observed this, for instance, due to fsflush running on clients, which will turn non-synchronous writes into a massive synchronous one. I note here that a way to reduce this effect is to slow down the fsflush cycle (autoup, to say 600). This is commonly done to reduce the CPU usage of fsflush but is also welcome when clients interact with NAS storage. We can also disable page flushing entirely by setting dopageflush to 0. But that is a client issue; from the server's perspective, we still need, as a NAS, to manage large commit requests.

When subject to such a workload, say a 1GB commit, ZFS, being fully aware of the situation, can decide to bypass the SSD device and issue requests straight to disk-based pool blocks. It does so for 2 reasons. One is that the pool of disks in its entirety has more throughput capability than the few write-optimized SSDs, and so we will service this request faster. But more importantly, the value of the SSD is in its latency reduction. Leaving the SSDs available to service many low-latency synchronous writes is what is valuable here. Another way to say this is that large writes are generally well served by regular disk operations (they are throughput-bound), whereas small synchronous writes (latency-bound) can and will get help from the SSDs.

Caches at work

On the read path we also have custom-designed read-optimized SSDs to fit in these OpenStorage platforms. At Sun, we just believe that many workloads will naturally lend themselves to caching technologies. In a consolidated storage solution, we can offer up to 128GB of primary memory-based caching and approximately 500GB of SSD-based caching.

We also recognized that the latency gap between a memory-cached response and a disk response was just too steep. By inserting a layer of SSDs between memory and disk, we have an intermediate step providing lower latency access than disk to a working set that is now many times greater than memory.

It's important here to understand how and when these read-optimized SSDs will work. The first thing to recognize is that the SSDs have to be primed with data. They feed off data being evicted from the primary caches, so their effect will not be seen immediately at the start of a benchmark. Second, the value of a read-optimized SSD is truly in low-latency responses to small requests. Small requests here means things on the order of 8K in size. Such requests occur either when dealing with small files (~8K) or when dealing with larger files accessed with a fixed record size, typically by a database. For those applications it is customary to set the recordsize, and this will allow these new SSDs to become more effective.

Our read-optimized SSDs can service up to 3000 read IOPS (see Brendan's work on the L2 ARC), which is close to or better than what a JBOD of 24 x 7.2K RPM disks can do. But the key point is that the low-latency response means the SSD can do so using far fewer threads than would be necessary to reach the same level on a JBOD. Brendan demonstrated here that the response time of these devices can be 20 times faster than disks, and 8 to 10 times faster from the client's perspective. So once data is installed in the SSDs, users will see their requests serviced much faster, which means we are less likely to be subject to queuing delays.

The use of read-optimized SSDs is configurable in the appliance. Users should learn to identify the parts of their datasets that end up gated by lightly threaded read response time. For those workloads, enabling the secondary cache is the way to deliver the value of the read-optimized SSDs. For those filesystems, if the workload contains small files (around 8K), there is no need to tune anything; however, for large files accessed in small chunks, setting the filesystem recordsize to 8K is likely to produce the best response time.
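
On a generic OpenSolaris ZFS system with an L2ARC cache device, the analogous knobs are the secondarycache and recordsize properties; a minimal sketch with hypothetical share names:

        # cache both data and metadata for this share in the read-optimized SSDs
        zfs set secondarycache=all tank/homeshare
        # for large files accessed in small fixed-size chunks (e.g. a database)
        zfs set recordsize=8k tank/dbshare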

Another benefit of these SSDs is in the $/IOPS department. Some workloads are just IOPS-hungry while not necessarily huge block consumers. The SSD technology offers great advantages in this space, where a single SSD can deliver the IOPS of a full JBOD at a fraction of the cost. So for workloads that are more modestly sized but IOPS-hungry, a test drive of the SSDs will be very interesting.

It's also important to recognize that these systems are used in consolidation scenarios. It can be that some parts of the applications will be sped up by read- or write-optimized SSDs, or by the large memory-based caches, while other consolidated workloads exercise other components.

There is another interesting implication of using SSDs in the storage with regard to clustering. The read-optimized SSDs, acting as caching layers, never contain critical data. This means those SSDs can go into disk slots of the head nodes, since there is no data to be failed over. On the other hand, the write-optimized SSDs do store data associated with critical synchronous writes. But since those are located in dual-ported backend enclosures, not the head nodes, it follows that during clustered operation the storage head nodes do not have to exchange any user-level data.

So by using ZFS and read- and write-optimized SSDs, we can deliver low-latency writes for applications that rely on them, and good throughput for synchronous and non-synchronous cases, using cost-effective SATA drives. Similarly, on the read side, the large amount of primary and secondary cache enables delivering high IOPS at low latency (even if the workload is not highly threaded), and it can do so using the more cost- and energy-efficient SATA drives.

Our architecture allows us to take advantage of the latency accelerators while never being gated by them.

Delivering Performance Improvements to Sun Storage 7000


I describe here the effort I spearheaded studying the performance characteristics of the OpenStorage platform, and the ways in which our team of engineers delivered real out-of-the-box improvements to the product that is shipping today.

One of the joys of working on the OpenStorage NAS appliance was that solutions we found to performance issues could be immediately turned into changes to the appliance without further process.

The first big wins

We initially stumbled on 2 major issues, one for NFS synchronous writes and one for the CIFS protocol in general. The NFS problem was a subtle one involving the distinction between O_SYNC and O_DSYNC writes in the ZFS intent log, and it was impacting our threaded synchronous write test by up to a 20X factor. Fortunately, I had a history of studying that part of the code and could quickly identify the problem and suggest a fix. This was tracked as 6683293: concurrent O_DSYNC writes to a fileset can be much improved over NFS.

The following week, turning to CIFS studies, we were seeing great scalability limitations in the code. Here again I was fortunate to be the first one to hit this. The problem was that to manage CIFS requests, the kernel code was using simple kernel allocations sized to accommodate the largest possible request. Such large allocations and deallocations cause what is known as a storm of TLB-shootdown cross-calls, limiting scalability.

Incredibly, though, after implementing the trivial fix, I found that the rest of the CIFS server was beautifully scalable code with no other barriers. So with one quick and simple fix (using kmem caches) I could demonstrate a great scalability improvement for CIFS. This was tracked as 6686647: smbsrv scalability impacted by memory.

Since those 2 protocol problems were identified early on, I must say that no serious protocol performance problems have come up. While we can always find incremental improvements to any given test, our current implementation has held up to our testing so far.

In the next phase of the project, we did a lot of work on improving network efficiency at high data rates. In order to deliver the throughput that the server is capable of, we must use 10Gbps network interfaces, and the ones available on the NAS platforms are based on the Neptune networking interface running the nxge driver.

Network Setup

I collaborated on this with Alan Chiu, who already knew a lot about this network card and its driver tunables, so we could quickly hash out the issues. We had to decide on a proper out-of-the-box setup involving:
	- how many MSI-X interrupts to use
	- whether to use networking soft rings or not
	- what bcopy threshold to use in the driver, as opposed to
	  binding DMA
	- whether or not to use the new Large Segment Offload (LSO)
	  technique for transmits
We knew basically where we wanted to go here. We wanted many interrupts on the receive side, so as to not overload any single CPU, and we wanted to avoid the use of layered soft rings, which reduce efficiency. We wanted a low bcopy threshold so that DMA binding would be used more frequently, as the default value was too high for this x64-based platform. And LSO was providing a nice boost to efficiency. That got us to a proper efficiency level.

However, we noticed that under stress and a high number of connections our efficiency would drop by 2 or 3X. After much head scratching, we rooted this in the use of too many TX DMA channels. It turns out that with this driver and architecture, using a few channels leads to more stickiness in the scheduling and much, much greater efficiency. We settled on 2 TX rings as a good compromise. That got us to a level of 8-10 CPU cycles per byte transferred in network code (more on Performance Invariants).

Interrupt Blanking

Studying an open-source alternative controller, we also found that on 1 of 14 metrics we were slower. That was rooted in the interrupt blanking parameter that NICs use to gain efficiency. What we found here was that by reducing our blanking to a small value we could leapfrog the competition (from 2X worse to 2X better) on this test while preserving our general network efficiency. We were then on par or better for every one of the 14 tests.

Media Streaming

When we ran thousands of 1 Mb/s media streams from our systems, we quickly found that the file-level software prefetching was hurting us. So we initially disabled that code in our lab to run our media studies, but at the end of the project we had to find an out-of-the-box setup that could preserve our media results without impairing maximum read streaming. At some point we realized what we were hitting: 6469558: ZFS prefetch needs to be more aware of memory pressure. It turns out that the internals of the zfetch code are set up to manage 8 concurrent streams per file, each of which can read ahead up to 256 blocks or records, in this case 128K. So when we realized that with 1000s of streams we could read ahead ourselves out of memory, we knew what we needed to do. We decided on 2 streams per file reading ahead up to 16 blocks, and that seems quite sufficient to retain our media-serving throughput while keeping some prefetching capability. I note here also that NFS client code will itself recognize streaming and issue its own readaheads; the backend code is then reading ahead of the client readahead requests, so we were kind of getting ahead of ourselves here. Read more about it @ cndperf.
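
For a stock OpenSolaris system of that era, the corresponding /etc/system tunables would look something like the sketch below (the appliance ships with its own settings built in; treat these names and values as illustrative):

        * limit zfetch to 2 concurrent prefetch streams per file
        set zfs:zfetch_max_streams = 2
        * and cap readahead at 16 blocks per stream
        set zfs:zfetch_block_cap = 16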

To slog or not to slog

One of the innovative aspects of this OpenStorage server is the use of read- and write-optimized solid state devices; see for instance The Value of Solid State Devices.

Those SSDs are beautiful devices designed to help latency, not throughput. A massive commit is actually better handled by regular storage than by the SSDs. It turns out that it was dead easy to instruct the ZIL to recognize massive commits and divert its block allocation strategy away from the SSDs toward the common pool of disks. We see two benefits here: the massive commits will be sped up (preventing the SSDs from becoming the bottleneck), but more importantly the SSDs will remain available as low-latency devices to handle workloads that rely on low-latency synchronous operations. One should note here that the ZIL is a per-filesystem construct, so while one filesystem might be working on a large commit, another filesystem from the same pool might still be running a series of small transactions and benefiting from the write-optimized SSDs.

In a similar way, when we first tested the read-optimized SSDs, we quickly saw that streamed data would install itself in this caching layer and that it could slow down processing later. Again, the beauty of working on an appliance and closely with developers meant that by the following build those problems had been solved.

Transaction Group Time

ZFS operates by issuing regular transaction groups, in which modifications since the last transaction group are recorded on disk and the ueberblock is updated. This used to be done at a 5-second interval, but with the recent improvements to the write-throttling code this became a 30-second interval (on light workloads), which aims to not generate more than 5 seconds worth of I/O per transaction group. Using 5 seconds of I/O per txg was meant to maximize the ratio of data to metadata in each txg, delivering more application throughput. Now, these Storage 7000 servers typically have lots of I/O capability on the storage side, and the data/metadata ratio is not as much of a concern as it is for a small JBOD. What we found was that we could reduce the target of 5 seconds of I/O down to 1 while still preserving good throughput. This smaller value smoothed out operation.
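
For reference, on an OpenSolaris build of that era this corresponds roughly to the following /etc/system tunables (illustrative only; the appliance already ships with the adjusted value):

        * target roughly 1 second worth of I/O per transaction group instead of 5
        set zfs:zfs_txg_synctime = 1
        * and still issue a transaction group at least every 30 seconds when idle
        set zfs:zfs_txg_timeout = 30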

IT JUST WORKS

Well, that is certainly the goal. In my group, we spent the last year performance-testing these OpenStorage systems, finding and fixing bugs, suggesting code improvements, and looking for better compromises for common tunables. At this point, we're happy with the state of the systems, particularly for mirrored configurations with write-optimized SSD accelerators. Our code is based on a recent OpenSolaris build (from August) that already has a lot of improvements over Solaris 10, particularly for ZFS, to which we've added specific improvements relevant to NAS storage. We think these systems will at times deliver great performance (see Amitabha's results) but will almost always shine in the price/performance categories.