The Ecology of Computing

First things first. I've been really, really bad at keeping up this blog. Well, I've been really, really busy. But that excuse has gotten pretty thin, and even my mom has been sending me email with titles like "Time for a new blog!" Got it. So here's what's been on my mind recently.

We are at an inflection point of a massive build-out of computing: Google's mondo datacenters, Microsoft's response around live.com, Amazon.com's forays into public grids, and Salesforce.com's AppExchange are only a small sample. These are examples of folks who are seriously under-served by Moore's Law --- their appetites are satisfied only through absolute growth in the total amount of silicon dedicated to their causes. Indeed, industry ecosystems depend upon absolute growth in unit volume, not just growth in single-system performance. More detail on this in a subsequent post. Here, I'm going to focus upon a major consequence of this growth.

The turn of the millennium roughly coincided with modern computing's 50th anniversary, and I frequently got the question "what will computing look like in 2050?" That's either really easy or impossible to answer. Extrapolating Moore's Law, for example, is lots of fun: 25 doubling times over those 50 years (one doubling every two years, conservatively) is thirty-something-million times the "transistors" per "chip" ("spin-states per cc" will be more like it). Here's how to picture it: imagine Google's entire million-CPU grid of today in your mobile phone of 2050...
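For the skeptical, the arithmetic is easy enough to check. A minimal sketch, assuming one doubling every two years:

```python
# Back-of-the-envelope Moore's Law extrapolation, 2000 -> 2050,
# assuming one doubling every two years (the conservative end).
doublings = (2050 - 2000) // 2    # 25 doublings
growth = 2 ** doublings           # 33,554,432 -- "thirty-something million"
print(f"{doublings} doublings -> {growth:,}x transistors per chip")
```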

Simply extrapolating Moore's Law doesn't account for the absolute growth in computing stuff. So my answer then, and even more emphatically today, is that computing in 2050 will be dominated by two concerns, ecology and ethics: what are the environmental and sociological consequences of computing something? It's not what can we compute, but what should we responsibly compute? You have guessed right if you are imagining a very confused interviewer politely smiling at me. "Will there still be Microsoft and Dell?"

What is the ecology of computing? Will the 2050 Google-grid-in-your-pocket run on the energy of a self-winding watch, or will it, on current trend lines, require a million-amp supply at 0.1V (hey, that's only 100 kW)? And more to the here-and-now: will the main go-forward consideration in siting computing be the access to, and cost of, power? Just about every customer I speak with today has some sort of physical computing issue: they are maxed out on space, cooling capacity, or power feeds --- and frequently all three. The people who take this very seriously run linear-programming (LP) optimizations over building costs, power, labor, bandwidth, latency, taxes, permits, and climate.
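To make that concrete, here's a deliberately tiny sketch of the flavor of LP involved. The sites, costs, and capacities below are entirely made up; real siting models juggle far more variables (labor, latency, taxes, climate, ...):

```python
# Toy datacenter-siting LP: split a 100 MW load across three
# hypothetical sites to minimize annual cost. All numbers invented.
from scipy.optimize import linprog

cost_per_mw = [60, 45, 80]           # $k/year per MW (power + labor + taxes)
site_capacity = [50, 40, 100]        # MW available at each site
total_load = 100                     # MW that must land somewhere

result = linprog(
    c=cost_per_mw,                               # minimize total cost
    A_eq=[[1, 1, 1]], b_eq=[total_load],         # place the whole load
    bounds=[(0, cap) for cap in site_capacity],  # respect capacities
)
print(result.x, result.fun)   # [50. 40. 10.] 5600.0
```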

More than cost, people cite the time it takes to plan, get approval for, and build out a new datacenter. Any datacenter of reasonable size involves some serious negotiation with local utilities, not to mention getting signed permits from your friendly municipalities. Running out of datacenter capacity can be a Bad Thing. And not surprisingly, the product marketing du jour for those of us in the business of supplying infrastructure treats space and power as first-order requirements. Low-power design is not new, of course. One only needs to look at the attention that goes into a modern mobile phone to appreciate the extraordinary lengths engineers go to in order to conserve microwatts.

What is new, or at least newly emphasized, is optimizing the defining equation of a server: maximum throughput per watt. Some of the low-power circuit tricks will apply, and we'll all get very clever about powering back under less-than-fully-utilized conditions. There is also the basic blocking and tackling of power-supply conversion efficiencies and low-impedance airflow. I'm also expecting a lot more "co-design" of the heat-generating parts (servers, storage, and switches) with the heat-removal parts (fans, heat exchangers, chillers). The first signs of this are rack-level enclosures with built-in heat exchange. There are also some simple, and effective, things to do with active damping of airflows within datacenters. This is just the beginning, and my advice is to Watch This Space.
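One way to see why powering back matters so much: a server that idles hot wastes energy per unit of work whenever it isn't busy. A minimal sketch, with invented idle/peak figures:

```python
# Energy per operation versus utilization for a server that draws
# substantial power even when idle. Idle/peak numbers are hypothetical.
PEAK_WATTS, IDLE_WATTS = 300.0, 200.0    # made-up, but a typical shape
PEAK_OPS = 10e6                          # ops/sec at full utilization

def ops_per_joule(utilization):
    watts = IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilization
    return (PEAK_OPS * utilization) / watts

for u in (0.1, 0.5, 1.0):
    print(f"{u:4.0%} utilized: {ops_per_joule(u):,.0f} ops/joule")
```

At 10% utilization this hypothetical box delivers roughly a seventh of the work-per-joule it manages flat out, which is exactly why idle power draw is the enemy.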

Naturally, we feel really good about the fundamental advance of our throughput computing initiative of the past five years --- especially the UltraSPARC T1 (née Niagara 1) and its 65nm follow-on, Niagara 2. By focusing on throughput for parallel workloads, in contrast to performance on a single thread, one is led to a design space that is far more energy efficient. We frequently see an order-of-magnitude (base 10!) improvement in both raw performance and performance-per-watt with the T1. Indeed, it's so significant that last month PG&E began offering rebates of up to $1,000 to our customers who replace less efficient existing systems. So, yeah, as engineers it feels really good to innovate like this --- something like building a bullet train that gets 100 miles per gallon. Kicking butt, responsibly. This is just the start. Watch This Space, too.

This is great stuff, but let's take a big step back and ask "how much power must computing take?" What are the fundamental limits of energy efficiency, and are we getting even close to them? Theoretically speaking, computation (and communication) can be accomplished, in the limit, with zero energy. No, this isn't some violation of the Laws of Thermodynamics. It turns out that the only real energy dissipation (literally, entropy increase) required is when a bit is destroyed (when the number of logical states decreases). Rolf Landauer, one of the great IBM Fellows, was the first to convincingly prove this, along with a bunch of other fundamental limits to computation and communication.
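The limit Landauer derived is concrete, and tiny: erasing one bit must dissipate at least kT ln(2) joules. A quick computation with standard constants at room temperature:

```python
# Landauer's limit: the minimum dissipation for erasing one bit is
# k*T*ln(2) joules -- about 3e-21 J at room temperature, many orders
# of magnitude below what today's gates actually burn.
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # room temperature, K

landauer = k_B * T * math.log(2)
print(f"{landauer:.2e} J per bit erased")   # ~2.87e-21 J
```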

If you don't destroy bits --- meaning you build reversible or adiabatic logic systems --- then you get a hall pass from the Second Law. You hold the number of logical states constant, so it's not inevitable that you increase physical entropy. Richard Feynman was a huge proponent of these kinds of systems, and there has been steady, and marked, progress in the area. See the University of Florida group, for example. If you are interested, check out some of the pioneering work of Fredkin, Toffoli, Knight, Younis, and Frank (apologies to others; these are just some of the works that I've followed).
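A tiny illustration of what "not destroying bits" means in practice: Fredkin's controlled-swap gate, one of the primitives from that literature, maps inputs to outputs bijectively, so no logical state is ever lost. This sketch just verifies those properties:

```python
# Fredkin gate (controlled swap): a reversible logic primitive.
# Because it's a bijection on its 3-bit state space, no logical
# state -- and hence no Landauer entropy -- is destroyed.
from itertools import product

def fredkin(c, a, b):
    return (c, b, a) if c else (c, a, b)   # swap a, b when control c is set

states = list(product([0, 1], repeat=3))
outputs = [fredkin(*s) for s in states]

assert len(set(outputs)) == len(states)                   # bijective
assert all(fredkin(*fredkin(*s)) == s for s in states)    # its own inverse
```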

Of course, even the most aggressive researchers in the area will caution that there's no free lunch --- we have to draw a system boundary somewhere, and crossing that boundary will cost something. Even so, it's likely that we are off by 100x or 1000x from where we could be in energy efficiency. My guess is that we'll look back at today's "modern" systems as being about as efficient and ecologically responsible as we now view the first coal-fired steam locomotives.

Back to reality. We are making huge strides in the ecology of computing, but by many measures we are still being hugely wasteful. My bet is that the next few years will be ones that fundamentally re-define how we view large-scale computing infrastructure. Did I forget to mention to Watch This Space?

Love you, mom.
