Niagara - Designed for Network Throughput

Guest Author


We finally announce Niagara-based servers to the public! Billed as the
low-cost, energy-efficient, huge-network-throughput processors -
marketing mumbo jumbo, you think? Well, try it and you will see. I was
privileged enough that one of the earliest prototypes landed on my desk
(or in my lab, to be precise) so Solaris networking could be tailored to
take advantage of the chip. And boy, together with Solaris, this thing flies.

So you know that Niagara is a multi-core, multi-threaded chip, and Solaris
takes advantage of it in multiple ways. Let me highlight some of them.

Network performance

The load from the NIC is fanned out to multiple soft rings in the GLDv3
layer (http://blogs.sun.com/roller/page/sunay?entry=the_solaris_networking_the_magic#mozTocId767708)
based on the source IP address and port information. Each soft ring
in turn is tied to a Niagara thread and a Vertical Perimeter
(http://blogs.sun.com/roller/page/sunay?entry=solaris_networking_the_magic_revealed#mozTocId533719)
such that packets from a connection have locality
to a specific H/W thread on a core and the NIC has locality to a specific
core. Think of this model as 4 H/W threads per core processing the NIC
such that if one thread stalls on a resource, the CPU cycles are not
wasted. The result is amazing network performance for this beast -
5-6 times the throughput of your typical x86-based CPU.
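To make the fanout idea concrete, here is a minimal sketch (in Python, not the actual Solaris kernel code) of hashing a connection's source address and port to a soft ring; the ring count and hash are assumptions for illustration only:

```python
NUM_SOFT_RINGS = 4  # hypothetical: one soft ring per H/W thread on a core

def soft_ring_for(src_ip: str, src_port: int) -> int:
    """Map a connection's source IP/port to a soft ring index, so every
    packet of that connection lands on the same ring (and therefore has
    locality to the same hardware thread)."""
    return hash((src_ip, src_port)) % NUM_SOFT_RINGS

# All packets of one connection hash to one ring; different connections
# spread across the rings, keeping all four H/W threads busy.
ring_a = soft_ring_for("192.168.0.1", 40000)
ring_b = soft_ring_for("192.168.0.1", 40000)
```

Because the ring choice is a pure function of the connection's addressing, packets never migrate between threads mid-connection, which is what preserves the cache locality the text describes.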


Imagine you are an ISP or someone wanting to consolidate multiple
machines onto one physical machine. Well, Niagara-based platforms lend
themselves beautifully to this concept because there are so many H/W
threads around, which appear as individual CPUs to Solaris. We have a
project underway called Crossbow
(http://www.opensolaris.org/os/community/networking/crossbow_sunlabs_ext.pdf;
details available on the Network Community page on OpenSolaris,
http://www.opensolaris.org/os/community/networking/) which will allow you to carve the
machine (create virtual network stacks) into multiple virtual machines,
tie specific CPUs to them, and control the B/W utilization of each
virtual machine on a shared NIC.
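As a rough illustration of the per-virtual-machine bandwidth control mentioned above (the actual Crossbow mechanism is not shown here; the class name, rates, and tick-based refill are invented for this sketch), a token-bucket limiter per virtual stack might look like:

```python
class VirtualStackLimiter:
    """Toy token bucket: caps how many bytes a virtual network stack
    may transmit per accounting interval on a shared NIC."""

    def __init__(self, rate_bytes_per_tick: int):
        self.rate = rate_bytes_per_tick
        self.tokens = rate_bytes_per_tick

    def tick(self):
        # Refill once per interval, capped at one interval's budget.
        self.tokens = self.rate

    def try_send(self, packet_len: int) -> bool:
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False  # over budget: hold the packet until the next tick

# Two virtual stacks sharing one NIC, each with its own budget.
fast = VirtualStackLimiter(rate_bytes_per_tick=10_000)
slow = VirtualStackLimiter(rate_bytes_per_tick=1_500)
sent = slow.try_send(1_400)   # first packet fits in the budget
held = slow.try_send(1_400)   # second packet exceeds it this interval
```

The point of the design is that each virtual machine's budget is enforced independently, so one busy tenant cannot starve the others on the shared NIC.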

Real Time Networking/Offload

With GLDv3-based drivers
(http://blogs.sun.com/roller/page/sunay?entry=the_solaris_networking_the_magic#mozTocId767708)
and the FireEngine architecture
(http://blogs.sun.com/roller/page/sunay?entry=solaris_networking_the_magic_revealed#mozTocId592593.25)
in Solaris 10, the stack controls the rate of interrupts
and can dynamically switch the NIC between interrupt and polling mode.
Coupled with the Niagara platform, Solaris can run the entire networking
stack on one core and provide real-time capabilities to the
application. Meanwhile, the applications themselves run on different
cores without worrying about networking interrupts pinning them down.
You can get pretty bounded latencies, provided the application can do some
admission control. We are also planning to hide the core running
networking from the application, effectively getting TOE for free
without suffering from the drawbacks of offloading networking to a
separate piece of hardware.
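The interrupt/polling switch can be sketched in spirit (the real FireEngine logic lives in the kernel; the thresholds and function below are made up for illustration) as a rate-driven mode decision with hysteresis:

```python
HIGH_WATER = 8_000   # hypothetical packets/sec above which we poll
LOW_WATER = 1_000    # hypothetical packets/sec below which we interrupt

def next_mode(current_mode: str, pkts_per_sec: int) -> str:
    """Decide whether the NIC should run in 'interrupt' or 'polling'
    mode from the recent packet rate. The two thresholds give
    hysteresis so the mode does not flap around a single cutoff."""
    if current_mode == "interrupt" and pkts_per_sec > HIGH_WATER:
        return "polling"      # busy: stop taking an interrupt per packet
    if current_mode == "polling" and pkts_per_sec < LOW_WATER:
        return "interrupt"    # idle: interrupts free the CPU for apps
    return current_mode

mode = next_mode("interrupt", 20_000)   # heavy load switches to polling
```

Under load the stack pulls packet chains off the NIC at its own pace instead of fielding one interrupt per packet, which is what keeps the other cores free for the application and the latencies bounded.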

Join the discussion

Comments ( 1 )
  • Derek Morr Tuesday, December 6, 2005
    My understanding is that the T2000 uses e1000g controllers, which are still dlpi based, so they wouldn't (yet) get the advantages of Nemo (GLDv3). I wouldn't be surprised if e1000g is converted to Nemo in Nevada, but it's not yet done in S10.