Monday, Nov. 10, 2008

Sun Storage 7000 Performance invariants



I see many reports about running campaigns of tests measuring performance over a test matrix. One problem with this approach is, of course, the matrix itself: it is never big enough for the consumer of the information ("can you run this instead?").

A more useful approach is to think in terms of performance invariants. We all know that a 7.2K RPM disk drive can do 150-200 IOPS as an invariant, and that disks have a throughput limit of around 80 MB/sec. Thinking in terms of those invariants helps in extrapolating performance data (with caution), and observing a breakdown in an invariant is often a sign that something else needs to be root caused.
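To illustrate how such invariants get used (the disk count and the per-disk figures below are assumptions for the sake of the example, not measurements), a back-of-the-envelope extrapolation in Python might look like this:

    # Back-of-the-envelope extrapolation from per-disk invariants.
    # All inputs are illustrative assumptions, not measured numbers.

    IOPS_PER_7200RPM_DISK = 175      # middle of the 150-200 IOPS invariant
    STREAM_MB_PER_DISK = 80          # per-disk streaming limit, MB/sec

    def pool_estimate(n_disks):
        """Rough upper bounds for a pool of n_disks, ignoring caches and RAID layout."""
        return {
            "random_read_iops": n_disks * IOPS_PER_7200RPM_DISK,
            "streaming_MB_s": n_disks * STREAM_MB_PER_DISK,
        }

    print(pool_estimate(44))   # {'random_read_iops': 7700, 'streaming_MB_s': 3520}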

So, using our 11 metrics and our performance engineering effort, what can be our guiding invariants? Bear in mind that these are expected to be rough estimates. For real measured numbers, check out Amitabha Banerjee's excellent post on Analyzing the Sun Storage 7000.

Streaming: 1 GB/sec on the server and 110 MB/sec per client

For read streaming, 1 GB/sec is our guiding number. This can be achieved with a fairly small number of clients and threads, but is easier to reach if the data is prestaged in the server caches. A client running a 1 GbE network card can extract 110 MB/sec rather easily. Read streaming is easier to achieve with the larger 128K records, probably due to the lower CPU demand. While our results are with regular 1500-byte Ethernet frames, using jumbo frames will make this limit easier to reach or even break. For a mirrored pool, data needs to be sent twice to the storage, and we see a reduction of about 50% for write streaming workloads.
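For what it's worth, the 110 MB/sec client figure is essentially the payload rate of a 1 Gbit/sec link once Ethernet, IP and TCP headers are accounted for. Here is a rough sketch of that arithmetic (the header sizes are the standard ones; treat this as an illustration only):

    # Rough payload throughput of a 1 GbE link with the standard 1500-byte MTU.
    # Header sizes are the usual Ethernet/IP/TCP values; no jumbo frames.
    link_bits_per_s = 1e9
    mtu = 1500                      # IP packet size
    eth_overhead = 38               # preamble + Ethernet header + FCS + inter-frame gap
    ip_tcp_headers = 40             # 20 bytes IP + 20 bytes TCP (no options)

    payload_per_frame = mtu - ip_tcp_headers            # 1460 bytes
    wire_bytes_per_frame = mtu + eth_overhead            # 1538 bytes
    payload_MB_s = link_bits_per_s / 8 * payload_per_frame / wire_bytes_per_frame / 1e6
    print(round(payload_MB_s, 1))   # ~118.7 MB/s theoretical; ~110 MB/s is seen in practice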

Random Read I/Os per second: 150 random read IOPS per mirrored disk

This is probably a good guiding light as well: when going to disk, 150 IOPS per mirrored disk is a reasonable expectation. Caching, however, can radically change this. Since we can configure up to 128 GB of host RAM and four times that much in secondary caches, there are opportunities to break this barrier; but when I/Os do go to spindles, this limit must be kept in mind. We also know that RAID-Z spreads each record across all disks in a group, so the 150 IOPS limit basically applies per RAID-Z group. Do plan to have many groups to service random reads.
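As a hedged sizing illustration (the target IOPS below is an arbitrary example), the 150 IOPS-per-group rule translates directly into a minimum number of RAID-Z groups:

    # How many RAID-Z groups are needed to sustain a target random-read load
    # from spindles alone (i.e. assuming cache misses)?  Figures are illustrative.
    import math

    IOPS_PER_GROUP = 150   # a RAID-Z group behaves roughly like one spindle for random reads

    def groups_needed(target_iops):
        return math.ceil(target_iops / IOPS_PER_GROUP)

    print(groups_needed(4000))   # 27 groups to serve 4000 uncached random read IOPS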

Random Read I/Os per second using SSDs: 3100 read IOPS per Read Optimized SSD

In some instances, data evicted from main memory will be kept in the secondary cache (the L2 ARC). Small files and filesystems with a tuned recordsize are good target workloads for this. Those read-optimized SSDs can serve this data back at a rate of 3100 IOPS each. More importantly, they do so at much reduced latency, meaning that even lightly threaded workloads will be able to achieve high throughput.
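The latency point can be made concrete with Little's law (throughput = concurrency / latency); the service times in this sketch are assumptions for illustration, not device specifications:

    # Little's law: achievable IOPS for a lightly threaded reader.
    # Latencies are illustrative assumptions, not measured device specs.
    def iops(threads, latency_s):
        return threads / latency_s

    print(round(iops(1, 0.0003)))   # ~3333 IOPS with one thread at ~0.3 ms (SSD-class latency)
    print(round(iops(1, 0.006)))    # ~167 IOPS with one thread at ~6 ms (7.2K RPM disk seek)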

Synchronous writes per second: 5000-9000 synchronous writes per Write Optimized SSD

Synchronous writes can be generated by O_DSYNC writes (databases) or simply as part of the NFS protocol (such as the tar-extract open/write/close workload). These reach the NAS server and are coalesced into a single transaction with the separate intent log. The SSD log devices are great latency accelerators but are still devices with a maximum throughput of around 110 MB/sec. However, our code detects when the SSD devices become the bottleneck and diverts some of the I/O requests to the main storage pool. The net of all this is a complex equation, but we've easily observed 5000-8000 synchronous writes per second per SSD, up to 3 devices (or 6 in mirrored pairs). Using a smaller working set, which creates less competition for CPU resources, we've even observed 48K synchronous writes per second.
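A hedged way to see which limit applies is to compare the IOPS ceiling with the throughput ceiling for a given record size (the figures are the rough invariants quoted above, used purely for illustration):

    # Which limit applies for a log SSD: the IOPS ceiling or the ~110 MB/s throughput ceiling?
    # Figures are the rough invariants quoted above, used purely for illustration.
    SSD_MB_S = 110
    SSD_SYNC_IOPS = 8000

    def sync_writes_per_s(record_bytes):
        throughput_bound = SSD_MB_S * 1e6 / record_bytes
        return min(SSD_SYNC_IOPS, throughput_bound)

    print(round(sync_writes_per_s(8 * 1024)))      # 8000: small writes are IOPS bound
    print(round(sync_writes_per_s(128 * 1024)))    # ~839: large writes hit the throughput limit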



Cycles per Byte: 30-40 cycles per byte for NFS and CIFS

Once we include the full NFS or CIFS protocol, the efficiency was observed to be in the 30-40 cycles per byte range (8 to 10 of those coming from the pure network component at the regular 1500-byte MTU). More studies are required to figure out the extent to which this is valid, but it's an interesting way to look at the problem. Having to run disk I/O, versus being serviced directly from cached data, is expected to add another 10-20 cycles per byte. Obviously, for metadata tests in which only a small number of bytes is transferred per operation, we probably need to come up with a cycles-per-metaop invariant, but that is still TBD.
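One way to use a cycles-per-byte figure is to turn a CPU budget into a throughput ceiling. In the sketch below the core count and clock rate are assumptions for the example, not a 7000-series specification:

    # Turning a cycles-per-byte cost into an upper bound on protocol throughput.
    # Core count and clock are assumed example values, not actual product specs.
    cores = 16
    hz_per_core = 2.3e9
    cycles_per_byte = 35            # middle of the observed 30-40 range

    max_bytes_per_s = cores * hz_per_core / cycles_per_byte
    print(round(max_bytes_per_s / 1e9, 2), "GB/s")   # ~1.05 GB/s, consistent with the 1 GB/s invariant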

Single-client NFS throughput: 1 TCP window per round-trip latency

This is a fundamental rule of network throughput, but it's a good occasion to refresh it in everyone's mind. Clients, at least Solaris clients, will establish a single TCP connection to a server. On that connection there can be a large number of unrelated requests, as NFS is a very scalable protocol. However, a single connection will transport data at a maximum rate of the socket buffer size divided by the round-trip latency. Since today's network speeds, particularly in wide area networks, have grown somewhat faster than default socket buffers, this can become a performance bottleneck. Given that I work in Europe but my test systems are often located in California, I might be a little more sensitive than most to this fact. One important change we made early in this project was to simply bump the default socket buffers in the 7000 line to 1 MB. For read throughput under similar conditions, we can only advise you to do the same on your client infrastructure.
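The window-per-round-trip rule translates directly into a ceiling on single-connection throughput; the round-trip times in this sketch are assumptions for illustration:

    # Single TCP connection ceiling: throughput <= window / round-trip time.
    # RTTs are illustrative; 1 MB is the bumped-up socket buffer mentioned above.
    def max_MB_s(window_bytes, rtt_s):
        return window_bytes / rtt_s / 1e6

    print(round(max_MB_s(48 * 1024, 0.150), 2))      # ~0.33 MB/s: 48 KB window over a 150 ms WAN
    print(round(max_MB_s(1024 * 1024, 0.150), 2))    # ~6.99 MB/s: 1 MB window over the same WAN
    print(round(max_MB_s(1024 * 1024, 0.001), 1))    # ~1048.6 MB/s: 1 MB window on a 1 ms LAN (link-limited in practice)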

Designing Performance Metrics for Sun Storage 7000

One of the necessary checkpoints before launching a product is to be able to assess its performance. With the Sun Storage 7xxx line we had a challenge, in that the only NFS benchmark of any notoriety was SPEC SFS. This benchmark has its supporters, and some customers may be attached to it, but it's important to understand what a benchmark actually says.

The SFS benchmark is largely about "cache busting" the server: this is interesting, but at Sun we think that caches are actually helpful in real scenarios. Data goes through cycles in which it becomes hot at times. Retaining that data in cache layers allows much lower latency access, and much better human interaction with storage engines. Being a cache-busting benchmark, SFS numbers end up as a measure of the number of disk spindles attached to the NAS server, so a good SFS result requires hundreds or thousands of expensive, energy-hungry 15K RPM spindles. To deliver good IOPS, layers of caching matter more to the end-user experience and to the cost efficiency of the solution.

So we needed another way to talk about performance. Benchmarks tend to test the system in peculiar ways that do not necessarily reflect the workloads each customer is actually facing. There are many workload generators for I/O, but one interesting one that is open source and extensible is Filebench, which is available in source form.

So we used Filebench to gather basic performance information about our system, with the hope that customers will then use Filebench to generate profiles that map to their own workloads. That way, different storage options can be evaluated on hopefully more meaningful tests than standard benchmarks.

Another challenge is that a NAS server interacts with client systems that themselves keep a cache of the data. Given that we wanted to understand the back-end storage, we had to set up the tests to avoid client-side caching as much as possible. For instance, between the phase of file creation and the phase of actually running the tests, we needed to clear the client caches and at times the server caches as well. These possibilities are not readily accessible with the simplest load generators, and we had to do this in a rather ad-hoc fashion. One validation of our runs was to ensure that the amount of data transferred over the wire, observed with Analytics, was compatible with the aggregate throughput measured at the clients.

Still another challenge was that we needed to test a storage system designed to interact with a large number of clients. Again, load generators are not readily set up to coordinate multiple clients and gather global metrics. During the course of the effort, Filebench did gain a clustered mode of operation, but we were too far along our path to take advantage of it.

This coordination of clients is important because the performance information we want to report is the performance actually delivered to the clients. Each client reports its own value for a given test and our tool sums up the numbers; but such a sum is only valid inasmuch as the tests ran on the clients in the same timeframe. The possibility of skew between tests is something that needs to be monitored by the person running the investigation.
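Here is a minimal sketch of that aggregation concern (the result format and the overlap threshold are hypothetical): per-client numbers are only summed when their timed windows overlap sufficiently.

    # Sum per-client throughput only if the timed windows overlap enough.
    # The result format and the 90% overlap threshold are made-up for illustration.
    def aggregate(results, min_overlap=0.9):
        """results: list of (start_s, end_s, throughput_MB_s) per client."""
        latest_start = max(r[0] for r in results)
        earliest_end = min(r[1] for r in results)
        shortest_run = min(r[1] - r[0] for r in results)
        overlap = (earliest_end - latest_start) / shortest_run
        if overlap < min_overlap:
            raise ValueError(f"client runs skewed: only {overlap:.0%} overlap")
        return sum(r[2] for r in results)

    print(aggregate([(0, 30, 110), (1, 31, 108), (0.5, 30.5, 112)]))   # 330 MB/s aggregate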

One way we increased this coordination was to divide our tests into two categories: those that required precreated files, and those that created files during the timed portion of the runs. If not handled properly, file creation would cause significant result skew. The option we pursued was to have a file pre-creation phase that was done once. From that point, our full set of metrics could be run and repeated many times with much less human monitoring, leading to better reproducibility of results.

Another goal of this effort was to be able to run our standard set of metrics in a relatively short time, say less than one hour. In the end we got that down to about 30 minutes per run to gather 10 metrics. Keeping the time short is important because there are lots of ways such a test can be misrun, and having someone watch over the runs is critical to the value of the output and to its reproducibility. So after having run the file pre-creation phase offline, one could run many repeated instances of the tests, validating the runs with Analytics and, through general observation of the system, gaining some insight into the meaning of the output.

At this point we were ready to define our metrics.

Obviously we needed streaming reads and writes. We needed random reads. We needed small synchronous writes, important to database workloads and to the NFS protocol. Finally, small file creation and stat operations completed the mix. For random reads we also needed to distinguish between operating from disks and from storage-side caches, an important aspect of our architecture.

Another thing on my mind was that this is not a benchmark. That means we would not be trying to fine-tune the metrics in order to find exactly the optimal number of threads and request size that leads to the best possible performance from the server. That is not the way your workload is set up. Your number of running client threads is not elastic at will. Your workload is what it is (threading included); the question is how fast it is being serviced.

So we defined precise per-client workloads with a preset number of threads running the operations. We came up with this set as an illustration of representative loads:
    1- 1 thread streaming reads from 20G uncached set, 30 sec. 
    2- 1 thread streaming reads from same set, 30 sec.
    3- 20 threads streaming reads from 20G uncached set, 30 sec.
    4- 10 threads streaming reads from same set, 30 sec.
    5- 20 threads 8K random read from 20G uncached set, 30 sec.
    6- 128 threads 8K random read from same set, 30 sec.
    7- 1 thread streaming write, 120 sec
    8- 20 threads streaming write, 120 sec
    9- 128 threads 8K synchronous writes to 20G set, 120 sec
    10- 20 threads metadata (fstat) IOPS from pool of 400k files, 120 sec
    11- 8 threads 8K file create IOPS, 120 sec. 


For each of the 11 metrics, we can propose a mapping to relevant industries:
     1- Backups, Database restoration (source), DataMining , HPC
     2- Financial/Risk Analysis, Video editing, HPC
     3- Media Streaming, HPC
     4- Video Editing
     5- DB consolidation, Mailserver, generic fileserving, Software development.
     6- DB consolidation, Mailserver, generic fileserving, Software development.
     7- User data Restore (destination)
     8- Financial/Risk Analysis, backup server
     9- Database/OLTP
     10- Web 2.0, Mailserver/Mailstore, Software Development
     11- Web 2.0, Mailserver/Mailstore, Software Development 


We managed to get all these tests running except the fstat one (test 10), due to a technicality in Filebench. Filebench insisted on creating the files up front, and this test required thousands of them; moreover, Filebench used a method that ended up single-threaded to do so, and in the end the stat information was mostly cached on the client. While we could have plowed through some of these issues, their conjunction made us put the fstat test aside for now.

Concerning thread counts, we figured that a single-stream read test was at times critical (for administrative purposes) and an interesting measure of latency. Tests 1 and 2 were defined this way, with test 1 starting with cold client and server caches and test 2 continuing the run after having cleared the client cache (but not the server's), thus showing the boost from server-side caching. Tests 3 and 4 are similarly defined with more threads involved, for instance to mimic a media server. Tests 5 and 6 ran random reads, again with test 5 starting with a cold server cache and test 6 continuing with some of the data precached from test 5. Here we did have to deal with client caches, trying to ensure that we did not hit the client cache too much as the run progressed. Tests 7 and 8 showcased streaming writes for single and 20 streams (per client). Reproducibility of tests 7 and 8 is more difficult, we believe, because of client-side fsflush behavior; we found that we could get more stable results by tuning fsflush on the clients. Test 9 is the all-important synchronous write case (for instance, a database). This test truly showcases the benefit of our write-side SSDs and also shows why tuning the recordsize to match ZFS records with database accesses is important. Test 10 was inoperative as mentioned above, and test 11, file create, completes the set.

Given that these were predefined test definitions, we're very happy to see that our numbers actually came out really well, particularly for the mirrored configs with write-optimized SSDs. See for instance the results obtained by Amitabha Banerjee.

I should add that these tests can now be used to give ballpark estimates of the capability of the servers; they were not designed to deliver the topmost numbers from any one config. The variability of the runs is at times greater than we'd wish, and so your mileage will vary. Using Analytics to observe the running system can be quite informative and a nice way to actually demo that capability. So use the output with caution and use your own judgment when it comes to performance issues.

Tuesday, Nov. 04, 2008

People ask: where are we with ZFS performance?

The standard answer to any computer performance question is almost always "it depends", which is semantically equivalent to "I don't know". The better answer is to state the dependencies.


I would certainly like to see every performance issue studied with a scientific approach. OpenSolaris and DTrace are incredible enablers when trying to reach root cause, and finding those causes is really the best way to work toward delivering improved performance. More generally though, people use common wisdom or possibly faulty assumptions to match their symptoms with those of other similarly reported problems. And, as human nature has it, we easily blame the component we're least familiar with. So we often end up with a lot of reports of ZFS performance problems that, once drilled down, turn out to be totally unrelated to ZFS (say, hardware problems), or to be misconfigurations, departures from best practices or, at times, unrealistic expectations.


That does not mean there are no issues. But it's important that users can more easily identify known issues, scheduled fixes, workarounds, etc. So anyone deploying ZFS should really be familiar with these two sites: the ZFS Best Practices Guide and the Evil Tuning Guide.


That said, what are the commonly encountered performance problems I've actually seen, and where do we stand?


Writes overrunning memory


This is a real problem that was fixed last March and is integrated in the Solaris 10 Update 6 (U6) release. Running out of memory causes many different types of complaints and erratic system behavior. It can happen any time a lot of data is created and streamed at a rate greater than the rate at which it can be written to the pool. Solaris U6 will be an important shift for customers running into this issue. ZFS will still try to use memory to cache your data (a good thing), but the competition this creates for memory resources will be much reduced. The way ZFS is designed to deal with this contention (ARC shrinking) will need a new evaluation from the community; the lack of throttling was a great impairment to the ability of the ARC to give back memory under pressure. In the meantime, lots of people are capping their ARC size with success, as per the Evil Tuning Guide.


For more on this topic check out : The new ZFS write throttle


Cache flushes on SAN storage


This is a common issue we hit in the enterprise. Although it causes ZFS to be totally underwhelming in terms of performance, it's interestingly not a sign of any defect in ZFS. Sadly it touches the customers that are the most performance minded. The issue is somewhat related to ZFS and somewhat to the storage. As is well documented elsewhere, ZFS will, at critical times, issue "cache flush" requests to the storage elements on which it is layered. This takes into account the fact that storage can be layered on top of _volatile_ caches whose contents need to be set on stable storage for ZFS to reach its consistency points. Enterprise storage arrays do not use _volatile_ caches to store data and so should ignore the request from ZFS to "flush caches". The problem is that some arrays don't. This misunderstanding between ZFS and storage arrays leads to underwhelming performance. Fortunately we have an easy workaround that can be used to quickly identify whether this is indeed the problem: setting zfs_nocacheflush (see the Evil Tuning Guide). The best fix is to configure the storage to indeed ignore "cache flush" requests, and we also have the option of tuning sd.conf on a per-array basis. Refer again to the Evil Tuning Guide for more detailed information.


NFS slow over ZFS (Not True)


This is just not generally true, and is often a side effect of the cache flush problem above. People have used storage arrays to accelerate NFS for a long time but failed to see the expected gains with ZFS; many sightings of NFS problems are traced to this.


Other sightings involve common disks with volatile caches. Here the performance deltas observed are rooted in the stronger semantics that ZFS offers in this operational model. See NFS and ZFS for a more detailed description of the issue.


While I don't consider ZFS generally slow at serving NFS, we did identify in recent months a condition that affects high thread counts of synchronous writes (such as a database). This issue is fixed in Solaris 10 Update 6 (CR 6683293).


I would encourage you to be familiar with where we stand regarding ZFS and NFS, because I know of no big gaping ZFS-over-NFS problems (if there were one, I think I would know). People just need to be aware that NFS is a protocol that needs some type of acceleration (such as NVRAM) in order to deliver a user experience close to what a direct-attached filesystem provides.


ZIL is a problem (Not True)


There is a wide perception that the ZIL is the source of performance problems. This is just a naive interpretation of the facts. The ZIL serves a very fundamental role in the filesystem and does so admirably well. Disabling the synchronous semantics of a filesystem will necessarily lead to higher performance in a way that is totally misleading to the outside observer. So while we are looking at further ZIL improvements for large-scale problems, the ZIL is just not, today, the source of common problems. Please don't disable it unless you know what you're getting into.


Random reads from RAID-Z


RAID-Z is a great technology that allows blocks to be stored on top of common JBOD storage without being subject to the RAID-5 write-hole corruption (see http://blogs.sun.com/bonwick/entry/raid_z). However, the performance characteristics of RAID-Z depart from RAID-5 significantly enough to surprise first-time users. RAID-Z as currently implemented spreads each block across the full width of the RAID group, which creates extra IOPS during random reads. At lower loads, the latency of operations is not impacted, but sustained random read loads can suffer. However, workloads that end up with frequent cache hits will not be subject to the same penalty as workloads that access vast amounts of data more uniformly. This is where one truly needs to say, "it depends".
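For a hedged back-of-the-envelope comparison (disk counts and per-spindle IOPS are assumptions, and caching is ignored), the same spindles laid out as mirrors and as RAID-Z groups give very different uncached random-read capacities:

    # Rough random-read estimate for RAID-Z: each block read touches every data
    # disk in the group, so a group delivers roughly one disk's worth of read IOPS.
    # All figures below are illustrative assumptions.
    DISK_IOPS = 150

    def raidz_random_read_iops(total_disks, group_width):
        groups = total_disks // group_width
        return groups * DISK_IOPS          # ~one spindle of IOPS per group

    def mirror_random_read_iops(total_disks):
        return total_disks * DISK_IOPS     # every spindle can serve a different read

    print(raidz_random_read_iops(48, 6))   # 1200 IOPS from 48 disks in 6-wide RAID-Z groups
    print(mirror_random_read_iops(48))     # 7200 IOPS from the same 48 disks mirrored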


Interestingly, the same problem does not affect RAID-Z streaming performance and won't affect workloads that commonly benefit from caching. That said, both random and streaming performance can be improved, and we are looking at a number of different ways to do so. To better understand RAID-Z, see one of my very first ZFS entries on this topic: RAID-Z.


CPU consumption, scalability and benchmarking


This is an area in which we need to make more studies. With today's very capable multicore systems, there are many workloads that won't suffer from the CPU consumption of ZFS. Most systems do not run 100% CPU bound (being more generally constrained by disks, networks, or application scalability), and the user-visible latency of operations is not strongly impacted by extra cycles spent in, say, ZFS checksumming.


However, this view breaks down when it comes to system benchmarking. Many benchmarks I encounter (the most carefully crafted ones, to boot) end up as host CPU efficiency benchmarks: how many operations can I do on this system, given large amounts of disk and network resources, while preserving some level X of response time? The answer to this question is simply the inverse of the cycles spent per operation.


This concern is more relevant when the CPU cycles spent managing direct-attached storage and the filesystem are in direct competition with cycles spent in the application. This is also why database benchmarking is often associated with using raw devices, something much less often encountered in common deployments.


Root-causing scalability limits and efficiency problems is just part of the never-ending performance optimization of filesystems.


Direct I/O


Direct I/O has been a great enabler of database performance in other filesystems. The problem for me is that Direct I/O is a group of improvements, each with its own contribution to the end result. Some want the concurrent writes, some want to avoid a copy, some want to avoid double caching, and some don't know but see performance gains when it is turned on (some also see a degradation). I note that concurrent writes have never been a problem in ZFS, and that the extra copy used when managing a cache is generally cheap considering common database rates of access. Achieving greater CPU efficiency is certainly a valid goal, and we need to look into what is impacting it in common database workloads. In the meantime, ZFS in OpenSolaris got a new feature to manage the cacheability of data in the ZFS ARC: the per-filesystem "primarycache" property will allow users to decide whether blocks should linger in the ARC cache or just be transient. This will allow databases deployed on ZFS to avoid any form of double caching that might have occurred in the past.


ZFS performance is, and will be, a moving target for some time. Solaris 10 Update 6, with its new write throttle, will be a significant change, and OpenSolaris offers additional advantages. But generally, just be skeptical of any performance issue that has not been root caused: the problem might not be where you expect it.


Wednesday, May 14, 2008

The new ZFS write throttle

A very significant improvement is coming soon to ZFS: a change that will increase the general quality of service delivered by ZFS. Interestingly, it's a change that might also slow down your microbenchmark, but it is nevertheless one you should be eager for.


Write throttling


For a filesystem, write throttling designates the act of blocking applications for some amount of time, as short as possible, waiting for the proper conditions to allow their write system calls to succeed. Write throttling is normally required because applications can write to memory (dirty memory pages) at a rate significantly faster than the kernel can flush the data to disk. Many workloads dirty memory pages by writing to the filesystem page cache at near memory-copy speed, possibly using multiple threads issuing high rates of filesystem writes. Concurrently, the filesystem is doing its best to drain all that data to the disk subsystem.


Given the constraints, the time to empty the filesystem cache to disk can be longer than the time required for applications to dirty the cache. Even if one considers storage with fast NVRAM, under sustained load, that NVRAM will fill up to a point where it needs to wait for a slow disk I/O to make room for more data to get in.


When committing data to a filesystem in bursts, it can be quite desirable to push the data at memory speed and then drain the cache to disk during the lapses between bursts. But when data is generated at a sustained high rate, lack of throttling leads to total memory depletion. We thus need at some point to try and match the application data rate with that of the I/O subsystem. This is the primary goal of write throttling.


A secondary goal of write throttling is to prevent massive data loss. When applications do not manage I/O synchronization (i.e., don't use O_DSYNC or fsync), data ends up cached in the filesystem, and the contract is that there is no guarantee the data will still be there if a system crash occurs. So even if the filesystem cannot be blamed for such data loss, it is still nice to help prevent such massive losses.


Case in point: UFS write throttling


For instance, UFS uses the fsflush daemon to try to keep data exposed for no more than 30 seconds (the default value of autoup). UFS also keeps track of the amount of outstanding I/O for each file; once too much I/O is pending, UFS throttles writers for that file. This is controlled through ufs_HW and ufs_LW, whose values were commonly tuned (a bad sign). Eventually the old default values were updated and seem to work nicely today. UFS write throttling thus operates on a per-file basis. While there are some merits to this approach, it can be defeated, as it does not manage the imbalance between memory and disks at the system level.


Previous ZFS write throttling


ZFS is designed around the concept of transaction groups (txgs). Normally, every 5 seconds an _open_ txg goes to the quiesced state; from there, the quiesced txg goes to the syncing state, which sends the dirty data to the I/O subsystem. For each pool, there is at most one txg in each of the three states: open, quiescing, syncing. Write throttling used to occur when the 5-second txg clock fired while the syncing txg had not yet completed. The open group would wait on the quiesced one, which waits on the syncing one. Application writers (the write system call) would block, possibly for a few seconds, waiting for a txg to open. In other words, if a txg took more than 5 seconds to sync to disk, we would globally block writers, thus matching their speed with that of the I/O. But if a workload had a bursty write behavior that could be synced during the allotted 5 seconds, applications would never be throttled.


The Issue


But ZFS did not sufficiently control the amount of data that could get into an open txg. As long as the ARC cache was no more than half dirty, ZFS would accept data. For a large-memory machine, or one with weak storage, this was likely to cause long txg sync times. The downsides were many:


	- If we did end up throttled, long sync times meant the system behavior would be sluggish for seconds at a time.

	- Long txg sync times also meant that the granularity at which we could generate snapshots was impacted.

	- We ended up with lots of pending data in the cache, all of which could be lost in the event of a crash.

	- The ZFS I/O scheduler, which prioritizes operations, was also negatively impacted.

	- By not throttling, we had the possibility that sequential writes to large files could displace a very large number of smaller objects from the ARC. Refilling that data meant a very large number of disk I/Os. Not throttling can paradoxically end up being very costly for performance.

	- The previous code could also, at times, not issue any I/Os to disk for seconds, even though the workload was critically dependent on storage speed.

	- And foremost, the lack of throttling depleted memory and prevented ZFS from reacting to memory pressure.

That ZFS is considered a memory hog is most likely the result of the previous throttling code. Once a proper solution is in place, it will be interesting to see if we behave better on that front.


The Solutions


The new code keeps track of the amount of data accepted in a TXG and the time it takes to sync. It dynamically adjusts that amount so that each TXG sync takes about 5 seconds (txg_time variable). It also clamps the limit to no more than 1/8th of physical memory.


To avoid the system-wide, seconds-long throttle effect, the new code detects when we are dangerously close to that situation (7/8th of the limit) and inserts 1-tick delays for applications issuing writes. This prevents a write-intensive thread from hogging the available space and starving out other threads. The delay should also generally prevent the system-wide throttle.


So the new steady-state behavior for write-intensive workloads is that, starting with an empty TXG, all threads are allowed to dirty memory at full speed until a first threshold of bytes in the TXG is reached. At that point, every write system call is delayed by 1 tick, significantly slowing down the pace of writes. If the previous TXG completes its I/Os, the current TXG is allowed to resume at full speed. But in the unlikely event that a workload, despite the per-write 1-tick delay, manages to fill the TXG up to the full threshold, we are forced to throttle all writes in order to allow the storage to catch up.
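Here is a minimal sketch of that decision logic (the thresholds follow the description above, but the code is my illustration, not the actual ZFS implementation):

    # Illustrative model of the new write-throttle decision, not the real ZFS code.
    # txg_limit is the per-TXG byte budget (dynamically sized so a sync takes ~5s,
    # clamped to 1/8th of physical memory); a write is delayed by one clock tick
    # once the TXG is 7/8th full, and blocked outright once the budget is exhausted.
    import time

    TICK_S = 0.01                      # one scheduler tick, assumed 10 ms

    def admit_write(txg_dirty_bytes, write_bytes, txg_limit, prev_txg_syncing):
        """Return the action applied to an incoming write under the new throttle."""
        if txg_dirty_bytes + write_bytes > txg_limit:
            return "block"                       # wait for the previous TXG to finish syncing
        if txg_dirty_bytes > txg_limit * 7 // 8 and prev_txg_syncing:
            time.sleep(TICK_S)                   # slow the writer down by one tick
            return "delayed"
        return "accepted"

    limit = 512 * 1024 * 1024                    # example 512 MB TXG budget
    print(admit_write(100 << 20, 1 << 20, limit, True))   # accepted
    print(admit_write(460 << 20, 1 << 20, limit, True))   # delayed
    print(admit_write(512 << 20, 1 << 20, limit, True))   # block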


It should make the system much better behaved and generally more performant under sustained write stress.


If you are the owner of an unlucky workload that ends up slowed by the new throttling, do consider the other benefits you get from the new code. If they do not compensate for the loss, get in touch and tell us what your needs are on that front.

