Russian Dolls

How much information entropy does my Thursday evening phone call have? Very little. It's Thursday evening. She knows I want her nod for the usual beers after soccer. She senses I am already at the bar; fait accompli. Not much entropy in her response either. I may get a rare yet firm “No” if I forgot a school open house or, God forbid, an anniversary. With proper attention to calendar minutiae, the entire phone call would carry zero information; the answer would always be “Yes”. With my fallible memory it carries one bit of information. The Grant or Deny bit. I don't squander any more bits on apologies over the phone. Love is never having to say I am sorry in front of ten implacable teammates at a bar.
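
For the information-theory minded, here is a minimal sketch (mine, not the post's) of the binary entropy behind that Grant or Deny bit: with a perfect calendar the outcome is certain and carries zero bits; with my memory it approaches a full bit.

```c
/* Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p).
 * Illustrative sketch; not from the original post. */
#include <math.h>
#include <stdio.h>

static double binary_entropy(double p)
{
    if (p <= 0.0 || p >= 1.0)
        return 0.0;  /* a certain "Yes" (or "No") carries no information */
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p);
}

int main(void)
{
    printf("H(1.00) = %.2f bits\n", binary_entropy(1.00)); /* perfect calendar */
    printf("H(0.50) = %.2f bits\n", binary_entropy(0.50)); /* my fallible memory */
    return 0;
}
```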

But that permission bit rides on a great deal of infrastructure and overhead. Digitizing her voice at, say, eight kilosamples per second; packetizing the cell phone air interface traffic; establishing the call from the bar through signaling protocols. Oh, plus billing record updates (the constitution does not codify free phone speech any more than free beer). Thousands of bytes exchanged to settle whether I go home before or after a cold pitcher. Overhead paid for the flexibility to call anybody, the flexibility to carry digital information, and the flexibility of layered modular architectures that adapt to new uses with contained changes.

Like Russian dolls, one inside the other, and the last tiny one is her Yes or No bit.
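
To put rough numbers on that overhead, a back-of-envelope sketch; the call length and sample width are my assumptions, not figures from the post:

```c
/* How many voice-payload bytes wrap one permission bit?
 * All constants here are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const long samples_per_sec  = 8000; /* "eight kilosamples per second" */
    const long bytes_per_sample = 1;    /* assumed 8-bit companded PCM */
    const long call_seconds     = 10;   /* hypothetical call duration */

    long payload = samples_per_sec * bytes_per_sample * call_seconds;
    printf("%ld bytes per direction -- before packet headers, signaling,\n"
           "and billing records -- to carry one Grant-or-Deny bit\n", payload);
    return 0;
}
```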



We may contemplate the dolls individually, like engineers trained to think horizontally within the layer of interest. But it is tempting, once in a while, to reflect vertically across multiple layers. Open the dolls one by one. A Russian doll introspection of sorts.

The layered paths of high speed networks mimic these Russian dolls. Serial packet-switched wire protocols have become a bit parallel lately, as transceivers use multiple lanes (10 Gigabit Ethernet XAUI, for example), while the outer doll, the traditionally parallel bus that hosts network interfaces, has lately become kind of serial, like PCI-Ex carrying packets over serially abstracted lanes. In an actual system these dolls are tortuously encapsulated as data traverses the network interface and the I/O interface, and terminates in a really wide system memory whose physical interface may, paradoxically, be serially packetized (a la FBDIMM).
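
As a crude illustration of the nesting, consider the sketch below. The field sizes are simplified and the layering abbreviated; the point is only the doll-inside-a-doll structure:

```c
/* Simplified encapsulation sketch; real XAUI/PCI-Ex/FBDIMM framing
 * is far richer than these hypothetical layouts. */
struct payload   { unsigned char grant_or_deny; };  /* the innermost doll */
struct ip_packet { unsigned char ip_hdr[20];  struct payload data; };
struct eth_frame { unsigned char eth_hdr[14]; struct ip_packet ip; };
struct pcie_tlp  { unsigned char tlp_hdr[12]; struct eth_frame frame; }; /* the I/O doll */
```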

My usual reaction when thinking across layers is awe. Surprise that this complex tangle works reliably, or works at all. But considered coldly, the complexity is really an artifact of the modularity that ultimately simplifies each layer, so that we humans can get them right. One doll at a time.

A doll we have been crafting and fitting is codenamed Neptune (how original). It is an interface device attaching servers to 10 Gigabit networks. Neptune may soon get another name, to avoid upsetting some trademark lawyer somewhere in the solar system: an official, original, and hard-to-remember name. But to me she will always be Neptune.

So far our systems have mostly been attached to Gigabit networks, and as these systems get more powerful they don't deserve to be on Gigabit networks anymore. Visualize 10 Gigabits per second as a bigger door into and out of a server. Curiously, big doors are useful at both ends of the housing market: Monster Homes and Affordable Housing. Monster homes are big systems deployed for raw performance and a specific purpose; everything is big about them, not just the doors. Affordable housing stands for systems aimed at accommodating multiple subscribers at minimal cost per subscriber. Multi-tenancy of subscribers, if you will. Their simultaneous traffic needs also require big 10 Gigabit doors. These systems are a natural fit for CMT processor architectures, but that is a different story.

(Listen to Bob Sellinger for the story that ties CMT processors, subscribers, and economics together in his "Getting Ready for 4G" webcast, via his link at the bottom. Later, though; you are reading about dolls now.)

Neptune is more than a big door into a server; it is at least two doors: two 10 Gigabit Ethernet ports, because most infrastructure deployments are dual homed for redundancy.

Neptune does the serial-packets-to-memory-to-packets dance again when mediating between 10 Gigabit networks and PCI-Ex interfaces, and it uses every trick in the book to minimize the impact of such a byzantine path. It tackles the affordable housing problem where tenants have separate corridors to their apartment units, and in the process Neptune also solves the traditional network receive scalability problem. Traffic is segregated into separate internal device resources and separate system resources, ultimately targeting different threads/cores to service different traffic components.
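
A minimal sketch of the "separate corridors" idea, assuming a hypothetical driver layout (the names and structures below are mine, not Neptune's programming interface): each receive DMA channel is bound to a CPU, so different traffic components land on different threads/cores.

```c
/* Hypothetical receive-side layout: one DMA channel per "corridor",
 * each serviced by its own core.  Not the actual Neptune driver. */
#define NUM_RX_CHANNELS 8

struct rx_channel {
    int bound_cpu;  /* core whose thread services this ring */
    /* descriptor ring, interrupt vector, etc. elided */
};

static struct rx_channel rx_channels[NUM_RX_CHANNELS];

static void bind_channels(void)
{
    for (int i = 0; i < NUM_RX_CHANNELS; i++)
        rx_channels[i].bound_cpu = i;  /* one corridor per core */
}

/* Model of the hardware classifier steering a flow to its channel. */
static struct rx_channel *steer(unsigned int flow_hash)
{
    return &rx_channels[flow_hash % NUM_RX_CHANNELS];
}
```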

Where is the novelty? Until now we would first queue packets into the server, and then classify them for the purposes of distributing traffic up the stack. Neptune first classifies and then queues, and that makes all the difference. Now we can have asymmetrical resource usage models, we can apply policy, we can virtualize, and of course eliminate the nasty head-of-line blocking introduced by first queuing and then sorting things out.
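
The contrast is easiest to see side by side. A toy sketch, with hypothetical names, of the two orderings:

```c
/* Queue-then-classify versus classify-then-queue; a toy model,
 * not Neptune's actual data structures. */
#include <stddef.h>

#define NUM_CLASSES 4

struct packet { unsigned int flow_hash; struct packet *next; };
struct queue  { struct packet *head, *tail; };

static struct queue shared_queue;                 /* old way: one queue */
static struct queue per_class_queue[NUM_CLASSES]; /* new way: one per class */

static void enqueue(struct queue *q, struct packet *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Old ordering: everything lands in one queue and is classified on the
 * way up the stack; a slow flow at the head blocks everyone behind it. */
static void rx_queue_then_classify(struct packet *p)
{
    enqueue(&shared_queue, p);
}

/* Neptune's ordering (sketched): classify first, then queue per class,
 * so classes drain independently -- no head-of-line blocking, and the
 * classification result can drive policy and virtualization. */
static void rx_classify_then_queue(struct packet *p)
{
    unsigned int class = p->flow_hash % NUM_CLASSES; /* stand-in classifier */
    enqueue(&per_class_queue[class], p);
}
```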

Some Neptune uses are already in place, some are in development, and some are just sketched on napkins (blog drafts, maybe?). Traffic spreading, for example, is already built into the Neptune device drivers. Extending the reach of multiple container and virtualization technologies all the way into the network interface is a progression that spans Solaris Containers, the upcoming Crossbow project in OpenSolaris, and Logical Domains, to name just a few of our Russian dolls. Beyond these examples, Neptune can and will fit into other dolls in various markets and communities beyond Sun products.

With its CMT lineage, Neptune matches processors with high degrees of threading and concurrency, so much so that we even put a mini-Neptune inside the upcoming Niagara 2 processor, conjuring the familiar metaphor of a doll inside a doll.


LINKS GALORE:

Neptune adapter

Niagara 2

Yes, you should listen to Sellinger now



RELATED BLOGS:


Crossbow & Neptune, by Markus Flierl

Simon Bullen Networking Blog


PODCAST:


Hal Stern talks about Neptune with Muller, Nordmark, and Saulsbury





Comments:

[Trackback] For the last few years, we relabeled leading brands of network adapters and sold them for our systems. Now we have developed our own NIC chip again. Why? Because we wanted to do some things in a different way. It's a NIC to augment our massive multithre...

Posted by c0t0d0s0.org on February 19, 2007 at 07:12 PM PST #
