Saturday Oct 10, 2009

Infiniband Performance Limits: Streaming Disk Read and New Summary

Updated Performance Limit Summary

I was able to squeak out a few more bytes/second in the streaming DRAM test for IPoIB and have achieved a respectable upper bound for RDMA streaming disk reads for this Sun Storage 7410 configuration.  The updated summary is below with links to the relevant Analytics screenshots.  I'll update this summary as I gather more data.


                                   RDMA                      IPoIB
NFSv3 Streaming DRAM Read          2.93 GBytes/second **     ~2.40 GBytes/second *
NFSv3 Streaming Disk Read          2.11 GBytes/second **     1.47 GBytes/second *
NFSv3 Streaming Write              984 MBytes/second **      752 MBytes/second *
NFSv3 Max IOPS - 1-byte reads      (pending)                 (pending)
NFSv3 Max IOPS - 4K reads          (pending)                 (pending)
NFSv3 Max IOPS - 8K reads          (pending)                 (pending)
* IPoIB

The IPoIB numbers do not represent the maximum limits I expect to ultimately achieve.  On the 7410, CPU and disk utilization are well below their limits.  In the I/O path, we are nowhere close to saturating the IB transport, and the HyperTransport and PCIe root complexes have plenty of headroom.  The problem is the number of clients.  As I build out a better client fabric, expect these values to change.

** RDMA

With NFSv3/RDMA, I am able to hit maximum limits with the current client configuration (10 clients).  The exception is max IOPS.  In the streaming read from DRAM test, I was able to hit the limit imposed by the PCIe generation 1 root complexes and downstream bus.  For the streaming read/write from/to disk, I am able to reach the maximum we can expect from this storage configuration.  The throughput numbers are given in GBytes/second for the transport.  While the throughput numbers observed on the subnet manager were higher, I took a conservative approach to reporting the streaming write and DRAM read limits: I used the IOPS multiplied by the data transfer size (128K).  For example, 24041 (IOPS) x 128K (read size) = ~2.93 GBytes/second for the streaming read from DRAM test.  Once we have 64-bit port performance counters, I can be more confident in the throughput I observe through them.  For streaming read from disk, I used the reported disk throughput.
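
For reference, here is that conversion spelled out as a quick Python sketch (the IOPS and transfer size are the values quoted above, and GBytes here means the binary 2^30 variety):

    # Convert observed NFS read IOPS into approximate transport throughput.
    iops = 24041                 # 128K read operations per second (from Analytics)
    xfer_bytes = 128 * 1024      # data transfer size used in the streaming tests

    throughput = iops * xfer_bytes / 2**30
    print("%.2f GBytes/second" % throughput)   # prints ~2.93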

Fabric Configuration

Filer: Sun Storage 7410, with the following config:

• 256 GBytes DRAM
• 8 JBODs, each with 24 x 1 TByte disks, configured with mirroring
• 4 sockets of six-core AMD Opteron 2600 MHz CPUs (Istanbul)
• 2 Sun DDR Dual Port Infiniband HCAs
• 3 HBA cards
• noatime on shares, and database record size left at 128 KBytes

Clients: 10 blades, each with:

• 2 sockets of Intel Xeon quad-core 1600 MHz CPUs
• 3 GBytes of DRAM
• 1 Sun DDR Dual Port Infiniband HCA Express Module
• mount options:
  • read tests: mounted forcedirectio (to skip client caching), with rsize set to match the workload
  • write tests: default mount options

Switches: 2 internal Sun DataCenter 3x24 Infiniband switches (A and C)

Subnet manager:

• CentOS 5.2
• Sun HPC Software, Linux Edition
• 2 Sun DDR Dual Port Infiniband HCAs

NFSv3 Streaming Disk Reads

I was able to reach a maximum limit for NFSv3 streaming reads from disk over RDMA.  As with my previous tests, I have a 10 client fabric connected to the same Sun Storage 7410.  The clients are split equally between two subnets and connected to two separate HCA ports on the 7410.  Each client has a separate share mounted.  For the read from disk tests, I'm using all 10 clients, each running 10 threads to read 1 MB of data (see Brendan's seqread.pl script) from its own 2 GB file.  The shares are mounted with rsize=128K.
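
Brendan's seqread.pl is a Perl script; as a rough illustration of the per-client workload, here is a minimal Python analogue.  The mount point and file name are placeholders, and the share is assumed to be mounted with forcedirectio and rsize=131072 as described above.

    #!/usr/bin/env python3
    # Minimal sketch of the per-client streaming read workload: N threads, each
    # streaming through a large file on the NFS share in 1 MB reads.  The path
    # below is a placeholder for the client's own 2 GB file.
    import threading

    THREADS = 10
    READ_SIZE = 1024 * 1024              # 1 MB per read() call
    FILE = "/mnt/share/2g_file"          # hypothetical file on the mounted share

    def stream_read(tid):
        total = 0
        with open(FILE, "rb", buffering=0) as f:   # unbuffered; caching is skipped via forcedirectio
            while True:
                buf = f.read(READ_SIZE)
                if not buf:
                    break
                total += len(buf)
        print("thread %d: read %d bytes" % (tid, total))

    workers = [threading.Thread(target=stream_read, args=(i,)) for i in range(THREADS)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()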


Update on Maximum IOPS

I'm still waiting to run this set of tests with a larger number of clients.  But in the interim, I wanted to make sure that adding those clients would indeed push me to the limits of the 7410.  To validate my thinking, I ran a step test for the 4K maximum IOPS test.  Here, we can see the stepwise function of adding two clients at a time, plus one at the end, for a maximum of 9 clients.

We're scaling nicely: every two clients adds roughly 42000 IOPS per step, and the last client adds another 20000.  We're starting to reach a CPU limit, but if I add just 5 more clients, I can match Brendan's IOPS max of 400K.  I think I can do it!  Stay tuned...

Tuesday Sep 22, 2009

New Image for Old Blog

Yesterday, I posted the wrong image for the sequential read experiment.  That's been corrected and the words now match the image. :)

Infiniband for Q3.2009

The latest release of the SS7000 software includes support for Infiniband HCAs.  Each controller may be configured with a Sun Dual Port Quad Rate HCA (Sun option x4237) in designated slots.  The slot configuration varies by product, with up to three HCAs on the Sun Storage 7410.  The initial Infiniband upper level protocols (ULPs) include IPoIB and early adopter access for NFS/RDMA.  The same file- and block-based data protocols (NFS, CIFS, FTP, SFTP, HTTP/WebDAV, and iSCSI) we support over Ethernet are also supported over the IPoIB ULP.  OpenSolaris, Linux, and Windows clients with support for the OpenFabrics Enterprise Distribution (OFED 1.2, OFED 1.3, and OFED 1.4) have been tested and validated for IPoIB.  NFS/RDMA is offered for early adopters of the technology on Linux distributions running the 2.6.30 kernel or greater.

Infiniband Configuration

Infiniband IPoIB datalink partition and IP interface configuration is easy and painless, using the same network BUI page or CLI contexts as Ethernet.  Port GUID information is available for configured partitions on the network page, as shown below.  This makes it easy to add SS7000 HCA ports to a partition table on a subnet manager.  Once a port has been added to a partition on the subnet manager, the IPoIB device will automatically appear in the network configuration.  At this point, the device may be used to configure partition datalinks and then interfaces.  If desired, IP network multi-pathing (IPMP) can be employed by creating multi-pathing groups for the IPoIB interfaces.

HCA and port GUID and status information may also be found at the hardware slot location.  Navigate to Maintenance->Hardware->Slots for the controller and click on the slot information icon to see the firmware, GUID, status, and speed information associated with the HCA and its ports.


Performance Preview

So how does Infiniband perform in the SS7000 family?  Well, it really depends upon the workload and an adequately configured system.  Here, I'll demonstrate two simple workloads on a base SS7410.

Server

• Sun Storage 7410
• Software release Q3.2009
• 2 x quad-core AMD Opteron 2.3 GHz CPUs
• 64 GBytes DRAM
• 1 JBOD (23 disks, each 750 GB) configured for mirroring
• 2 logzillas configured for striping
• 2 Sun Dual Port DDR Infiniband HCAs, one port of each configured across two separate subnets

Clients

• 8 x blade servers, each containing:
  • 2 x quad-core Intel Xeon 1.6 GHz CPUs
  • 3 GBytes DRAM
  • 1 Sun Dual Port Infiniband HCA EM
  • Filesystems mounted using NFSv4, forcedirectio
  • Solaris Nevada build 118
• 4 clients connected to subnet 1 and 4 clients connected to subnet 2

Fabric

• Sun 3x24 Infiniband Data Switch, switches 0 (subnet 1) and 2 (subnet 2) configured across server and clients
• 2 OFED 1.4 OpenSM subnet managers operating as masters for switches 1 and 2

The SS7410 is really under-powered (2 CPUs, 64G memory, 1 JBOD, 2 logzillas, DDR HCAs) and nowhere near its operational limits.

Sequential Reads

In this experiment, I used a total of 8 clients, each running up to 5 threads performing sequential reads from a 10 GB file, using a slightly modified version of Brendan's seqread.pl script.  The clients are evenly assigned to each of the HCA ports.  More than 5 threads per client did not yield any significant gain, as I had hit the maximum I could get from the CPU.  I ran the experiment twice: once for NFSv4 over IPoIB and once for NFSv4/RDMA.  As expected, IPoIB yields better results with smaller block sizes, but I was surprised to see IPoIB outperform NFS/RDMA at a 64K transfer block size and stay in the running with every size in between.

I'm using the default quality of service (QoS) on the subnet manager.  As a result, with the clients evenly assigned to the HCA ports, we see a nice, even distribution of network throughput across the devices and of IOPS per client.

Synchronous Writes

In the read experiment, I was able to hit an upper bound on CPU utilization at about 8 clients x 5 threads.  What will it take to reach a maximum limit for synchronous writes?  To help answer that question, I'll use a stepping approach to the single synchronous write experiment above.  Looping through my 8 clients at one minute intervals, I'll add a 4K synchronous write thread every second until the number of IOPS levels off.  At about 10 threads per client, we start to see the number of IOPS reach a maximum.  This time CPU utilization is below its maximum (35%), but latency turns into a lake-effect ice storm.  We eventually top out at 38961 write IOPS for our 80 client threads.
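
As a rough illustration of what each added client thread is doing, here is a minimal Python sketch of a single 4K synchronous write worker.  The path and working-set size are placeholders, and the per-client stepping described above is assumed to be driven by an external harness.

    #!/usr/bin/env python3
    # One 4K synchronous write worker.  O_SYNC makes every write a stable write,
    # which is what pushes the log devices and disks rather than the client cache.
    import os
    import threading

    WRITE_SIZE = 4096
    FILE_BYTES = 1024 * 1024 * 1024          # 1 GB working set per thread (assumption)

    def sync_writer(path):
        buf = os.urandom(WRITE_SIZE)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
        try:
            for off in range(0, FILE_BYTES, WRITE_SIZE):
                os.pwrite(fd, buf, off)
        finally:
            os.close(fd)

    t = threading.Thread(target=sync_writer, args=("/mnt/share/syncwrite.0",))
    t.start()
    t.join()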


As a sanity check, I also captured the per-device network throughput.  At 38961 IOPS x 4 KBytes spread across the two HCA ports, the write payload alone comes to roughly 76 MBytes/sec per device; once I account for the additional NFSv4 operations and packet overhead, 93.1 MBytes/sec seems reasonable.  I ran this experiment with NFS/RDMA and discovered a marked drop-off (30%) in IOPS when run for a long period.  Until then, NFS/RDMA performed as well as IPoIB.  Something to investigate.

Next

I have a baseline for my woefully underpowered SS7410.  For sequential reads, I quickly bumped into CPU utilization limits at 40 client threads.  With the synchronous write workload, I topped out at 38961 IOPS due to increased disk latency.  But all is not lost: the SS7410 is far from its configurable hardware limits.  The next round of experiments will include:

• Buff up my 7410: give it two more CPUs and double the memory to help with reads
• Add more JBODs and logzillas to help with writes
• Configure the system into a QDR fabric to help the overall throughput

Tuesday Sep 08, 2009

Better Late Than Never

I've been remiss in posting and completely missed reporting on a couple of new Q2 features for the Fishworks software stack.  In Q2, we introduced three new secure data protocols: HTTPS, SFTP, and FTPS.  Dave Pacheco covers HTTPS in his blog, so I'll highlight SFTP and FTPS here.

FTPS

Our FTP server is built from the proFTPD server software stack.  In Q2, we updated the server to version 1.3.2 to take in a number of critical bug fixes and add support for FTP over SSL/TLS (FTPS).  The proFTPD server implements FTP over SSL/TLS in accordance with the FTP Security Extensions defined by RFC 2228.  Not all FTP clients support the FTP security extensions, but a list of clients that do may be found here.

Enabling FTPS on a Fishworks appliance is very simple.  From the FTP service configuration BUI page or CLI context, an administrator may choose to enable FTPS on the default port or an alternate port.  If FTPS is enabled for a port, the FTP server will use TLSv1 for its authentication and data channels.
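
For a client-side view, here is a minimal FTPS upload sketch using Python's standard ftplib.  The hostname, port, credentials, and file name are placeholders; the appliance side only needs FTPS enabled on the chosen port as described above.

    #!/usr/bin/env python3
    # Minimal FTPS (FTP over TLS) upload using the standard library.
    from ftplib import FTP_TLS

    ftps = FTP_TLS()
    ftps.connect("filer.example.com", 21)    # or the alternate FTPS port you enabled
    ftps.login("user", "secret")             # AUTH is negotiated before login by default
    ftps.prot_p()                            # protect the data channel as well
    with open("upload.bin", "rb") as f:
        ftps.storbinary("STOR upload.bin", f)
    ftps.quit()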

SFTP

The SSH File Transfer Protocol (SFTP) is a network protocol that provides file transfer over SSH.  SFTP is a protocol designed by the IETF SECSH working group.  SFTP does not itself provide authentication and security but rather delegates these to the underlying protocol.  For example, the SFTP server used on the SS7000 is implemented as a subsystem of the OpenSSH software suite.  The SFTP server is responsible for interpreting and implementing the SFTP command set, while authentication and securing of the communication channels over which the server transfers data are the responsibility of the underlying OpenSSH software.  SFTP should not be confused with:

• FTP over SSL/TLS (FTPS)
• FTP over SSH
• Simple File Transfer Protocol
• Secure Copy (SCP)

Configuration of the SFTP service is very similar to SSH.  The default port for SFTP is set to 218.  This port was selected because it does not conflict with any other ports on a Fishworks appliance and does not interfere with SSH communication for administration (port 22).  As with FTP and HTTP, shares may be exported for SFTP access by selecting read-only or read-write access from the Shares->Protocol BUI page or CLI context.
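
For a client-side sketch that shows the non-standard port, here is a minimal SFTP upload using the third-party paramiko library.  The hostname, credentials, and paths are placeholders.

    #!/usr/bin/env python3
    # Minimal SFTP upload over SSH; note the appliance's SFTP service on port 218.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # demo only
    client.connect("filer.example.com", port=218, username="user", password="secret")

    sftp = client.open_sftp()
    sftp.put("upload.bin", "/export/share/upload.bin")   # the transfer runs over SSH
    sftp.close()
    client.close()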

Battle of the SSL All-Stars

If you're an administrator pondering which secure protocol (HTTPS, FTPS, or SFTP) to choose, your main consideration will be client support.  Not all clients support all protocols: FTPS is limited in client adoption, and the SFTP IETF specification has yet to be finalized.  Secondary to client support will likely be performance.  Using a simple upload workload of a 10 GB file, we can easily compare the three protocols.  All three protocols use OpenSSL for encryption and decryption, so we would expect each protocol to be impacted in much the same way for secure transfers.  In the following image, we see, from top to bottom, the raw network throughput for SFTP, FTPS, and HTTPS.

To be fair to FTPS and HTTPS, I used curl(1) and the native version of sftp(1) on the Solaris client (the Solaris version of curl did not support the SFTP protocol).  Even so, HTTPS transfer rates are clearly lagging at almost 50% of FTPS.

Not surprisingly, CPU utilization increases by as much as 50% on the client and 10% on the server for an FTPS upload as compared to its non-SSL counterpart.  Thanks to Bryan and the new FTP (and SFTP) analytics, we can see the difference in FTP (top pane) vs FTPS (bottom pane) data rates.  FTPS can be as much as 84% slower than FTP.  Ouch!

Your mileage may vary with your workload, but it's nice to have the tools at hand to get an accurate assessment of each protocol and its SSL implementation.

Tuesday Dec 02, 2008

A Visual Look at Fishworks

So what does a project like Fishworks look like?

I put together a visual representation of the Fishworks project using Code Swarm that has been captured in a video.  The video shows how the Fishworks team and project evolved based on changes made to the source code over the course of two and a half years.  The Code Swarm tool uses organic visualization techniques to model the history of a project based on source code files and their relationship to the developers that create and modify them.  It's a very cool tool and a bit addictive.

There are a number of Code Swarm project visualizations available online.  The OpenSolaris and Image Packaging System (IPS) projects have been represented by Code Swarm based on raw commit data made to each project.  The OpenSolaris Code Swarm pulses as a single blob of source as developers come and go within the orbit of a vast code base.  In contrast, the Fishworks project shows well-defined orbits surrounding each developer.  This is a testament to the almost constant activity of a small number of developers on a well-partitioned source base.  I have elided gate re-synchronizations to better represent the project and the contributions of each developer.  This avoids the single bursts of activity by what seems to be one developer, as seen in the IPS Code Swarm.


Fishworks CodeSwarm from John Danielson on Vimeo

Code Swarm works natively with Subversion and Mercurial repositories.  The Fishworks project source base was controlled by SCCS, with logs created by Teamware.  I converted the Teamware 'putback' logs to the Code Swarm input XML format.  I do wonder how the visualization would change if I accounted for lines of code changed per file.  I might be able to use Eric's code tracking script to generate suitable input.
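
As an illustration of that conversion, here is a hypothetical Python sketch that emits Code Swarm event XML.  The 'date|author|filename' record layout assumed for the putback log is made up; the output is a file_events document of event elements with filename, author, and date (milliseconds since the epoch) attributes, which is the form Code Swarm consumes.

    #!/usr/bin/env python3
    # Convert hypothetical "date|author|filename" log records (stdin) into
    # Code Swarm's event XML (stdout).
    import sys
    import time
    from xml.sax.saxutils import quoteattr

    def convert(lines):
        yield '<?xml version="1.0"?>'
        yield '<file_events>'
        for line in lines:
            date_s, author, filename = line.rstrip("\n").split("|", 2)
            millis = int(time.mktime(time.strptime(date_s, "%Y-%m-%d %H:%M:%S"))) * 1000
            yield '  <event filename=%s author=%s date="%d"/>' % (
                quoteattr(filename), quoteattr(author), millis)
        yield '</file_events>'

    if __name__ == "__main__":
        for out in convert(sys.stdin):
            print(out)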

In the meantime, enjoy the show.
