Sunday May 17, 2009

QDR Magnum InfiniBand Switch

At SC08, Sun showcased the new QDR Magnum InfiniBand switch.
QDR Magnum InfiniBand Switch


The SB6048 also introduces the QDR NEM (QNEM):

  • 30 ports of 4x QDR InfiniBand and 24 ports of GbE

  • Up to 6:1 cable reduction as compared to competitive systems

  • Scalability up to 5,000+ node clusters


Sun PCIe dual-port QDR IB HCA and DDR FEM
QDR HCA, DDR FEM

General observations on the QNEM:


  • Functions as a leaf switch

  • Basis for a 3D torus IB fabric

  • Combined with core switches, one can build 3-stage, 5-stage, or 7-stage IB fabrics, etc.
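As a back-of-the-envelope sketch of how stage count drives cluster size (my own illustration, not from Sun's materials; the 36-port chip radix is an assumption), the standard fat-tree formula works out as:

```python
def fat_tree_hosts(radix, stages):
    """Maximum hosts in a non-blocking fat tree (folded Clos).

    `stages` is the number of switch stages on a worst-case path
    (3, 5, 7, ...); that corresponds to (stages + 1) // 2 levels
    of switches, each splitting its radix half down, half up.
    """
    levels = (stages + 1) // 2
    return 2 * (radix // 2) ** levels

# Assuming 36-port QDR switch silicon:
for s in (3, 5, 7):
    print(s, "stages:", fat_tree_hosts(36, s), "hosts")
```

Under these assumptions a 5-stage fabric already exceeds the 5,000+ node scalability mentioned above for the QNEM.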

Monday Sep 27, 2004

InfiniBand in Solaris 10

Solaris 10 introduces support for InfiniBand devices.

System Administration Guide: Devices and File Systems

Chapter 9, Using InfiniBand Devices

  • Overview of InfiniBand Devices
  • Dynamically Reconfiguring IB Devices With cfgadm
  • Using the uDAPL Application Interface With InfiniBand Devices
Solaris includes the tavor driver, which operates on the Mellanox MT23108 InfiniBand ASSP.
  • This ASSP supports the link and physical layers of the IB specification, while the ASSP and the driver together support the transport layer.
  • The tavor driver interfaces with the IB Transport Framework (IBTF) and provides an implementation of the Channel Interfaces. It also enables management applications and agents to access the IB fabric.
  • InfiniBand IPoIB device driver: ibd provides basic support for the IBA Unreliable Datagram Queue Pair hardware.
  • SCSI RDMA Protocol driver: ibsrp supports mechanisms for performing SCSI-3 transactions over an IB-based RDMA transport.
  • IB Transport Layer: ibtl adheres to IBA 1.1 and provides the programming interface for the IB Channel Interface and IB Transport Interface.
The first major application for IB in Solaris 10 is NFSv4 support over RDMA.

In the Solaris 10 release, if RDMA for InfiniBand is available, RDMA is the default transport for NFS.
Other potential applications:

  • Sun Cluster for Oracle 9i RAC and 10g
  • MPI applications over IB
There was an InfiniBand Summit HW IHV 2000 that contained many presentations.

Sunday Sep 26, 2004

InfiniBand Switches

Recently, to reply to some RFPs, I had to investigate offers from many different vendors.

I summarize my findings in this weblog:

  1. Mellanox:
    • InfiniScale II MT43132 chip, 8 4x ports, 240 Gbps, foundation for many IB switches
    • InfiniScale III MT47396 chip, 24 4x ports, 480 Gbps, foundation for many IB switches
    • InfiniHost dual-port 4x HCA/TCA, supports 128 MB, 256 MB, or 512 MB memory, 8 data VLs and VL15 management
    • MTS2400, 24 4x ports, 1U, InfiniScale III based
    • MTS14400, 144 4x ports, 10U, InfiniScale III based
    • MTS9600, 96 4x ports, 7U, InfiniScale based
  2. Topspin:
    • TS-90, 12 4x ports plus one dual-port FC-GW or 6-port IP-GW, 1U
    • TS-120, 24 4x ports, 1U
    • TS-360, 24 4x ports, 4U, plus up to 12 modules for FC-GW or IP-GW
    • TS-170, 96 4x ports, 7U
    • TS-270, 96 4x ports, 6U
    • HCA, 125 MB RAM
  3. InfiniCon:
    • InfinIO9000, 288 4x ports, 7U for 12 cards, 14U for 24 cards
    • InfinIO5000, 12 4x ports, 1U
    • InfinIO7000, 60 4x ports, 1U
    • InfinIO3000, 32 4x ports, 1U
    • InfiniServ HCA, 256 MB, dual-port 4x
  4. Voltaire:
    • ISR9288, 288 4x ports, 14U, 3-blade drawer, 2-port GE, 4-port FC
    • ISR9024, 24 4x ports, 1U
    • ISR6000, 1U, 3 slots, up to 3 modules: 6-port 4x IB, 2-port GE GW, or 4-port FC
Most of those products use silicon from Mellanox.

Using 24-port silicon to build non-blocking switches in a 3-stage setup, one can get:

  • 48 ports: 6 chips, 6 links between chips, 2 cores and 4 leafs
  • 72 ports: 9 chips, 4 links between chips, 3 cores and 6 leafs
  • 96 ports: 12 chips, 3 links between chips, 4 cores and 8 leafs
  • 144 ports: 18 chips, 2 links between chips, 6 cores and 12 leafs
  • 288 ports: 36 chips, 1 link between chips, 12 cores and 24 leafs
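The table above can be reproduced with a short sketch (my own arithmetic, for illustration; note that the chip count is cores + leafs):

```python
def clos_config(chip_ports, links_per_pair):
    """Two-level folded Clos built from identical switch chips.

    Each leaf chip splits its ports evenly: half face hosts, half
    go up, with `links_per_pair` parallel links to every core chip.
    Returns (host_ports, total_chips, cores, leafs).
    """
    half = chip_ports // 2
    cores = half // links_per_pair        # leaf uplinks spread over the cores
    leafs = chip_ports // links_per_pair  # core ports spread over the leafs
    return half * leafs, cores + leafs, cores, leafs

for m in (6, 4, 3, 2, 1):
    ports, chips, cores, leafs = clos_config(24, m)
    print(f"{ports} ports: {chips} chips, {m} links between chips, "
          f"{cores} cores and {leafs} leafs")
```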
To form an HPC cluster, one can use the two ports of the HCA for an HA configuration across two core switches.

The ratio of uplinks from the leaf switches to the core switches versus downlinks to hosts determines the blocking factor, e.g. for one 24-port switch:

  • 8 uplinks and 16 downlinks is considered 50% blocking.
  • 12 uplinks and 12 downlinks is considered 0% blocking (non-blocking).
  • 4 uplinks and 20 downlinks is considered 80% blocking.
  • 8 uplinks and 12 downlinks is about 33% blocking.
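One common way to express this arithmetic (a sketch of the figures above, not a vendor formula):

```python
def blocking(uplinks, downlinks):
    """Blocking factor: the fraction of host-facing (downlink)
    bandwidth that the uplinks cannot carry; 0 means non-blocking."""
    return max(0.0, 1 - uplinks / downlinks)

print(blocking(8, 16))   # -> 50% blocking
print(blocking(12, 12))  # -> non-blocking
print(blocking(4, 20))   # -> 80% blocking
```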
About: hstsao
