No Free Lunches
By hendel on Jun 06, 2007
Today's motif is the nonexistence of free lunches. To dig into these allegations I turn to my brother, whose answers make more sense than my questions, and to the Internet. Metaphorically, a free lunch is getting something for nothing. The expression goes back to the era of taverns that enticed drinking patrons with free food; "no free lunch" exposed the hidden cost, folded into the price of the drinks.
“È finita la cuccagna!” (“The free lunch is over!”) is an Italian variant, proclaimed at his inauguration by Fiorello La Guardia, the New York mayor after whom an airport is named. His “no-more-free-lunches” was a call against government graft. I don't know about government, but the proclamation still echoes at La Guardia airport, where you can hardly get a free or even a cheap meal.
“There is no such thing as a free lunch” was popularized by Milton Friedman, yet paradoxically asset diversification is touted as a free lunch in the risk-reward tradeoff. Either Milton was unaware of this investment panacea, or financial planners are colluding to sell us unnecessary positions.
The laws of thermodynamics dismiss the odds of free lunches in the physical world, but in spite of roadblocks like the No Free Lunch theorems, there is little deterrence against the pursuit of free lunches in the world of information and optimization: people serve hard time for violating laws, not theorems. Here my brother weighs in categorically: a better solution to a problem just means that the previous solution was sub-optimal. Sleep 14 hours every other day, then try 7 hours every day; you may feel better rested on the same proportion of sleep, but calling the new sleep regime a free lunch makes no formal sense. Overwhelmed by analogies, I retreat into this compromise:
A bona fide free lunch must be repeatable.
Not just a few free meals here and there, but a systematic way of repeating a win-win optimization. Let's be careful that repeatability does not become a demand for a perpetuum mobile or an infinite food source; the USPTO has, after all, stopped granting perpetual-motion patents without a working prototype. A free lunch is something you can base a finite diet on.
After buying drinks for a couple of rounds, the net infrastructure is hungry for free lunches. It craves new services, higher ARPUs, lower OPEX, and a lower power footprint. What else is new in the world of slideware? Mobile devices and laptops on the left, some boxes on the right, and the proverbial cloud in the middle. Services flow from right to left; revenues go the other way. Psychiatrists whose patients see internets in Rorschach tests are perplexed: the cloud slideware has permeated everything. Me, I see gateways. Every computer inside or attached to the cloud transforms and moves stuff between its interfaces. I see complex and stateful transformations in the cloud. Everything in that cloud is a gateway, and I am not crazy, doctor; even the puny devices on the left are gateways, doctor, with humans on one side and the net on the other.
For IP convergence these packet gateways need the intelligence to straddle heterogeneous protocols at various layers. When built out of traditional processors, gateways exhibit a brain-vs-muscle tradeoff: more complex packet processing means lower throughput. As carriers add services, they lower gateway throughput (or raise the per-subscriber cost). This work-throughput tradeoff is like carrying water with a bucket. For me the water rate has a negative slope when plotted against distance, but the curve can be flattened if I summon enough muscular friends. By swinging buckets from one to the next, we can maintain the water rate over any distance. This is repeatable, so the Bucket Brigade may represent a free lunch, at least until I run out of idle friends with buckets and the graph slopes down again.
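The tradeoff (and the brigade's way out of it) can be sketched with a toy model. The numbers below are illustrative, not measurements of any real gateway: a single worker's throughput falls as per-packet work grows, while splitting the work across identical pipeline stages holds throughput constant so long as each stage's share fits in one worker.

```python
# Toy model of the work-throughput tradeoff (illustrative numbers only).
CYCLES_PER_SECOND = 1_000_000_000  # one hypothetical 1 GHz worker

def single_worker_throughput(cycles_per_packet):
    """One processor does all the work: throughput drops as work grows."""
    return CYCLES_PER_SECOND // cycles_per_packet

def pipelined_throughput(cycles_per_packet, stages):
    """Split the work across a bucket brigade of identical workers.
    Throughput is set by the slowest stage, not by the total work."""
    cycles_per_stage = -(-cycles_per_packet // stages)  # ceiling division
    return CYCLES_PER_SECOND // cycles_per_stage

# Doubling the per-packet work halves single-worker throughput...
print(single_worker_throughput(1_000))  # 1000000 packets/s
print(single_worker_throughput(2_000))  # 500000 packets/s
# ...but doubling the stages restores it (the free lunch, while threads last).
print(pipelined_throughput(2_000, 2))   # 1000000 packets/s
```

Of course the model ignores the cost of handing the bucket off; the next paragraph is about keeping that handoff cheap.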
Similarly for gateways, we can use threads to pipeline packet processing. More complex processing inserts more stages into this “packet brigade” without sacrificing throughput. Keeping thread-to-thread communication costs low means the threads are ideally packed inside a single processor, and the densest general-purpose pool of vertical threads would be a CMT processor like Sun's upcoming Niagara 2. If said processor happens to have a couple of built-in 10G network pipes to get the "water" in and out, there you have your gateway engine. The rest is just a simple matter of programming...
Seriously, the software angles deserve their own blog entry. Promised. Today we just point out that a neat way of parallelizing execution is actually serial (i.e., pipelined). In a thread-rich future this is neat because it is a free lunch: increased computational work at constant throughput, repeatably. It works by tapping your idle friends (CMT threads), which incidentally can be summoned for a packet brigade using the just-released Logical Domains (check out http://www.sun.com/ldoms).