Perforce is a commercial source code management system available on many platforms, including Solaris SPARC and Solaris x86. As one of our ISV partners experienced last year, the storage subsystem is the likely performance bottleneck of a Perforce Server installation. With entry-level systems nowadays featuring a minimum of 4 CPU cores, 4GB of memory and dual GigE networking --see, e.g., the base configuration of the Sun Fire X2250 server-- CPU, memory and network are no longer bottlenecks, unless you run Windows and are subject to its 2GB limit on addressable memory per process. Solaris, by the way, will happily let 32-bit applications allocate most of the theoretical 4GB of addressable memory, and you can go beyond that with the 64-bit Perforce Solaris binaries.
So our partner was experiencing poor performance and blocking under high load (280 users, 16M files, 15 server instances) with Perforce Server 2006.2 running on a Sun Fire V440 server with Solaris 10 and a UFS filesystem with logging enabled. Benchmark results presented at the 2003 Perforce User Conference had already pointed out the low performance of UFS with Perforce, for which synchronous directory updates are a key performance factor. Linux performs better because it executes directory updates asynchronously --at the risk of data loss, of course. Our partner preferred to stay on a more reliable Solaris SPARC server --the source code repository and management system is the number one mission-critical application in a software house-- and tuned the kernel parameter segmap_percent to 80, but that yielded very limited gains in Solaris 10. At that point, we offered to test ZFS, the novel filesystem introduced in Solaris 10.
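For reference, segmap_percent is a /etc/system tunable; a minimal sketch of the change our partner made (the value 80 is from their setup, the rest is illustrative commentary):

```shell
# /etc/system fragment: raise the segmap segment used to cache UFS file
# data from the default 12% of physical memory to 80%. Takes effect
# after a reboot; has no benefit for ZFS, which caches via the ARC.
set segmap_percent=80
```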
We benchmarked Perforce Server (P4D) 2007.2 on a Sun Fire X4200 server and a Sun StorageTek 3510 array (RAID 1+0) to compare UFS and ZFS under the Perforce branch-and-submit scenario. Under ZFS, I/O rates improved from 30 to 125 MB/s for reads and from 125 to 150 MB/s for writes, with a ZFS record size of 128K (zfs set recordsize=128K), the maximum value currently supported by ZFS. We attribute the gains to ZFS's more aggressive caching --the ZFS ARC cache attempts to use most of the physical memory to cache filesystem data-- and its prefetch facility, which detects sequential or strided access to a file and issues I/O ahead of time. Benchmarks on a SPARC Sun Fire V890 server showed comparable improvements. Figures 1 and 2 below highlight the effectiveness of the ZFS ARC cache: UFS resorts to physical disk access more often, leading to the application stalls that our partner experienced.
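As a sketch, creating and tuning a filesystem like the one we benchmarked might look as follows (pool, filesystem and device names are illustrative, not our actual configuration):

```shell
# Create a mirrored pool on the array LUNs (device names are examples).
zpool create p4pool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

# Dedicated filesystem for the Perforce server data.
zfs create p4pool/p4depot

# 128K records: the largest record size ZFS currently supports, a good
# fit for the mostly large, sequential I/O on Perforce archive files.
zfs set recordsize=128K p4pool/p4depot

# Verify the setting.
zfs get recordsize p4pool/p4depot
```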
On the application side, the improved I/O rates translated into a peak commit rate of about 12K files per second, in line with the Linux numbers but this time with no risk of data loss, since ZFS is transactional and always consistent on disk. This project was another cool proof point that, with ZFS, you no longer have to choose between fast, reliable and cheap! As a side note, DTrace --the revolutionary dynamic instrumentation framework introduced in Solaris 10-- was heavily used to collect and analyze I/O patterns and caching behavior. We notably used scripts from the DTraceToolkit and this filesystem cache DTrace script.
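For instance, a one-liner along these lines (illustrative, not one of the exact scripts we ran) summarizes physical I/O sizes per process, which quickly shows whether reads are being absorbed by the ARC or hitting the disks:

```shell
# Histogram of physical I/O sizes by executable name, using the DTrace
# io provider. I/O satisfied from the ARC never reaches io:::start, so
# a quiet histogram under load means the cache is doing its job.
# Requires root on Solaris.
dtrace -n 'io:::start { @[execname] = quantize(args[0]->b_bcount); }'
```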
Going forward, another level of performance should be reached with L2ARC and flash-memory SSD storage. This is an exciting development in OpenStorage at Sun: it adds an extra layer to the storage hierarchy --fully transparent to the user thanks to ZFS-- that fits nicely between main memory and disks, as shown in Table 1 below.
| Tier | Access latency   | Typical capacity    |
|------|------------------|---------------------|
| RAM  | sub-microsecond  | 10's of gigabytes   |
| SSD  | sub-millisecond  | 100's of gigabytes  |
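Configuring an SSD as an L2ARC device is a single command; a sketch with illustrative pool and device names:

```shell
# Add an SSD as a cache (L2ARC) vdev to an existing pool. The ARC
# spills evicted blocks to it, so reads are served from RAM first,
# then SSD, then the disks.
zpool add p4pool cache c3t0d0

# Confirm the cache vdev appears in the pool layout.
zpool status p4pool
```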
This technology was first productized in the Sun Storage 7000 Unified Storage Systems, a.k.a. AmberRoad, available through Sun's Try-n-Buy program. In an NFSv3 test, L2ARC increased the number of random IOPS across a 500 GB dataset by 8.3x. What could it do for your own application? Why not try it out for yourself? The free trial runs for 60 days.