SPEC SFS over the years -or- How Fast Can That NFS Server Go?

March 18, 1993.


That is the birth date of the SPECsfs93 benchmark. Born from the desire for a server-side NFS benchmark, the LADDIS(*) workload grew into the industry standard that it is today. The last major update to the benchmark came in 1998 and included NFSv3 support, new workloads, and other improvements.


At the benchmark's release, the top-performing result was from Auspex; the NS 5000 NetServer, to be exact. You won't find that result at SPEC's site, since SPEC was in the middle of a transition at that point, but I happen to still have a hard copy of the results. The Auspex system achieved a whopping 1703 ops/sec at an average response time of 49.4 ms. Yes, that is right, 49.4 milliseconds. Now, remember this was 1993, and the state-of-the-art network used for these results was non-switched 10 Mbit Ethernet. One also has to remember that these results were for NFS version 2. But still, what a blast from the past. As I remember, there was even a big debate over the cap on response time: should it be 50 ms or 70 ms? And today? Over the last two years, there doesn't appear to be a single result with a peak response time greater than 10 ms (with many below 6 ms).


Why all of the nostalgia about SFS results from 13 years ago? Well, NetApp recently crossed a milestone for the SFS community with the release of results for their Data ONTAP GX system (a 24-node FAS6070). With this result, an NFS server under test has broken the 1 million ops/sec mark (1,032,461 to be exact).
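The scale of that jump is easier to appreciate with a quick back-of-the-envelope calculation. This is just a sketch using the two ops/sec figures quoted above; the compound-growth number is my own arithmetic, not something from the results themselves:

```python
# Back-of-the-envelope comparison of the two results cited in this post:
# Auspex NS 5000 (1993, SPECsfs93/NFSv2) vs. NetApp Data ONTAP GX
# 24-node FAS6070 (2006).
auspex_ops = 1703        # ops/sec, 1993
netapp_ops = 1032461     # ops/sec, 2006

speedup = netapp_ops / auspex_ops          # overall throughput growth
cagr = speedup ** (1 / 13)                 # compound annual growth over 13 years

print(f"Throughput grew roughly {speedup:.0f}x")
print(f"That is about {100 * (cagr - 1):.0f}% compounded per year")
```

Roughly a 600-fold increase in reported throughput, or on the order of 60-plus percent compounded annually.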


We are all aware of the speed with which the industry moves, but occasionally events or milestones like these put things into perspective. We often fall into the endless cycle of worrying about performance and improvements and don't step back to see what we have been able to achieve. The mere presence of the benchmark has been a major factor in the continual focus on NFS server improvement. The NFS industry has moved from 10 Mbit to 10 Gbit Ethernet. NFSv2 has generally been pushed to the far edges of use by NFSv3. And watch the scoreboard for NFSv4 (with NFSv4.1 not far behind), as it will in turn push NFSv3 to the edge of use. Average response times for servers have dropped by a factor of 10. And I almost forgot the other interesting debate that occurred when SFS93 was being released: should results require that UDP checksums be enabled? That seems quite silly today, given that we have moved beyond UDP. Today TCP is the norm for new SFS results (with the exception of one stubborn vendor).


So congratulations to NetApp for their achievement, and to the NFS server community in general for its continual movement forward. Of course, it is the customer who has benefited the most over the years.


Slipping back into that desire for continual improvement, I do need to point out that the NFS industry still lacks a client-side benchmark; it is an area that sorely needs the magnifying glass of benchmark results to drive the same types of improvements that have been made at the server. There is also the issue of the SFS benchmark's limited set of workloads. Ah well, back to the to-do list. And... Yeah NFS!


(*) Bonus points go to the first person to post the companies that initially contributed to LADDIS and hence the acronym; and it isn't fair to read the paper first.

Comments:

legato auspex dec dg interphase sun

Posted by albert on August 21, 2006 at 07:27 AM CDT #

Excellent answer! :-)

Posted by guest on August 21, 2006 at 07:33 AM CDT #

"Today TCP is the norm for new SFS results"
In all my SPEC SFS benchmarks UDP (see them here and here) gave me better results than TCP.
That's why I am going to use the UDP in my production NFS cluster.
Do you think it's wrong?

Posted by Leon Koll on August 22, 2006 at 05:32 PM CDT #

About

shepler
