Niagara - Designed for Network Throughput

We finally announced Niagara-based servers to the public! Billed as low-cost, energy-efficient, huge-network-throughput processors - marketing mumbo jumbo, you think? Well, try it and you will see. I was privileged enough that one of the earliest prototypes landed on my desk (or in my lab, to be precise) so Solaris networking could be tailored to take advantage of the chip. And boy, together with Solaris, this thing rocks!

So you know that Niagara is a multi-core, multi-threaded chip, and Solaris takes advantage of it in multiple ways. Let me highlight some of them.

Network performance

The load from the NIC is fanned out to multiple soft rings in the GLDv3 layer based on the source IP address and port information. Each soft ring in turn is tied to a Niagara thread and a Vertical Perimeter, such that packets from a connection have locality to a specific H/W thread on a core, and the NIC has locality to a specific core. Think of this model as 4 H/W threads per core processing the NIC, such that if one thread stalls for a resource, the CPU cycles are not wasted. The result is amazing network performance for this beast: it delivers 5-6 times the network performance of your typical x86-based CPU.
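
If you want a feel for the fanout, here is a minimal sketch in C; the function name and the hash are made up for illustration and are not the actual GLDv3 soft ring code:

/*
 * Hypothetical sketch of connection-to-soft-ring fanout. The real
 * GLDv3 soft ring code differs; this only shows the idea: hash the
 * connection's src IP/port so all its packets land on one soft ring,
 * and bind each soft ring to one Niagara H/W thread.
 */
#include <stdint.h>
#include <stdio.h>

#define	NUM_SOFT_RINGS	4	/* e.g. one per H/W thread on a core */

static unsigned int
fanout_soft_ring(uint32_t src_ip, uint16_t src_port)
{
	/* Mix the port into the address so similar IPs still spread. */
	uint32_t hash = src_ip ^ ((uint32_t)src_port * 2654435761u);

	return (hash % NUM_SOFT_RINGS);
}

int
main(void)
{
	/* All packets of a given connection pick the same ring. */
	printf("ring %u\n", fanout_soft_ring(0x0a000001, 80));
	printf("ring %u\n", fanout_soft_ring(0x0a000001, 8080));
	return (0);
}

Since the mapping is a pure function of the connection's addressing, every packet of a connection stays on one soft ring, and hence one H/W thread, which is what gives the per-connection locality.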

Virtualization

Imagine you are an ISP or someone wanting to consolidate multiple machines onto one physical machine. Well, Niagara-based platforms lend themselves beautifully to this concept because there are so many H/W threads around, which appear as individual CPUs to Solaris. We have a project underway called Crossbow (details available on the Network Community page on OpenSolaris) which will allow you to carve the machine into multiple virtual machines (by creating virtual network stacks), tie specific CPUs to them, and control the B/W utilization of each virtual machine on a shared NIC.
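
To give a flavor of the bandwidth-control piece, here is a hedged token-bucket sketch in C; the vstack_t structure and vstack_may_send() are hypothetical names for illustration, not Crossbow's actual interfaces:

/*
 * Hypothetical token-bucket sketch of per-virtual-stack bandwidth
 * control on a shared NIC. Crossbow's real mechanism differs; this
 * only illustrates how a configured share could be enforced.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct vstack {
	uint64_t bw_limit_bps;	/* configured bandwidth share */
	uint64_t burst_bytes;	/* cap on accumulated credit */
	uint64_t tokens;	/* bytes this stack may still send */
	uint64_t last_ns;	/* last time tokens were refilled */
} vstack_t;

/* Refill credit in proportion to elapsed time, then charge the packet. */
static bool
vstack_may_send(vstack_t *vs, uint64_t now_ns, uint64_t pkt_bytes)
{
	uint64_t elapsed_ns = now_ns - vs->last_ns;

	vs->tokens += (vs->bw_limit_bps / 8) * elapsed_ns / 1000000000ULL;
	if (vs->tokens > vs->burst_bytes)
		vs->tokens = vs->burst_bytes;
	vs->last_ns = now_ns;

	if (vs->tokens < pkt_bytes)
		return (false);	/* over its share: queue or drop */
	vs->tokens -= pkt_bytes;
	return (true);
}

int
main(void)
{
	/* 10 Mbit/s share; after 1s of idle, ~1.25 MB of credit. */
	vstack_t vs = { 10000000, 1500000, 0, 0 };

	printf("may send: %d\n", vstack_may_send(&vs, 1000000000ULL, 1500));
	return (0);
}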

Real Time Networking/Offload

With GLDv3-based drivers and the FireEngine architecture in Solaris 10, the stack controls the rate of interrupts and can dynamically switch the NIC between interrupt and polling mode. Coupled with the Niagara platform, Solaris can run the entire networking stack on one core and provide real-time capabilities to the application. Meanwhile, the applications themselves run on different cores without worrying about networking interrupts pinning them down. You can get pretty bounded latencies, provided the application can do some admission control. We are also planning to hide the core running networking from the applications, effectively getting TOE for free without suffering from the drawbacks of offloading networking to a separate piece of hardware.
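
The interrupt/polling switch can be pictured with the small C sketch below; the watermarks and names are hypothetical and only illustrate the idea, not the actual FireEngine implementation:

/*
 * Hypothetical sketch of the interrupt/polling decision. Under load
 * the stack disables NIC interrupts and polls; once traffic drains,
 * it re-arms interrupts so idle latency stays low.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct nic_state {
	bool polling;		/* true when interrupts are disabled */
	unsigned int backlog;	/* packets queued for the stack */
} nic_state_t;

#define	POLL_HIGH_WATER	16	/* switch to polling above this backlog */
#define	POLL_LOW_WATER	2	/* re-arm interrupts below this backlog */

/* Called after each batch of packets is handed to the stack. */
static void
nic_update_mode(nic_state_t *ns)
{
	if (!ns->polling && ns->backlog > POLL_HIGH_WATER) {
		/* Under load: stop interrupting, let the stack poll. */
		ns->polling = true;
	} else if (ns->polling && ns->backlog < POLL_LOW_WATER) {
		/* Traffic has drained: go back to interrupt mode. */
		ns->polling = false;
	}
}

int
main(void)
{
	nic_state_t ns = { false, 0 };

	ns.backlog = 32;	/* burst arrives */
	nic_update_mode(&ns);	/* switches to polling */
	ns.backlog = 1;		/* queue drains */
	nic_update_mode(&ns);	/* re-arms interrupts */
	printf("polling: %d\n", ns.polling);
	return (0);
}

The hysteresis between the two watermarks is the point: it keeps the NIC from flapping between modes when the backlog hovers near a single threshold.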



Comments:

My understanding is that the T2000 uses e1000g controllers, which are still DLPI-based, so they wouldn't (yet) get the advantages of Nemo (GLDv3). I wouldn't be surprised if e1000g is converted to Nemo in Nevada, but it's not yet done in S10.

Posted by Derek Morr on December 06, 2005 at 10:33 AM PST #

About

Sunay Tripathi, Sun Distinguished Engineer, Solaris Core OS, writes a weblog on the architecture of the Solaris networking stack, the GLDv3 (Nemo) framework, Crossbow Network Virtualization, and related topics.
