Winners and Losers

Jonathan has an excellent summary of the big-picture transition from customize to standardize to utilize over at ZDNet. While we are all still looking for the right vocabulary, a key point is that you know you are doing utility computing if you "utilize it" (okay, a bit circular). Importantly, you know you *aren't* doing it if you are "customizing it". The distinction is critical because true utility computing enjoys a scale economy: the marginal cost of supplying an additional user declines with scale. And, if things are really efficient, economically speaking, prices will approach that marginal cost (yes, driving average profits to zero).
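
To make that scale-economy point concrete, here is a minimal sketch of a cost model for a utility with a large fixed build-out cost and a per-user cost that is amortized better as the user count grows. All of the numbers and the sub-linear cost exponent are assumptions for illustration, not figures from any real utility; the point is simply that marginal cost falls with scale and average cost falls toward it.

```python
# Illustrative-only cost model for a computing utility (all numbers assumed).
FIXED_COST = 50_000_000.0      # build-out: data centers, engineering (assumed)
BASE_UNIT_COST = 100.0         # per-user cost at small scale (assumed)
SCALE_EXPONENT = 0.85          # < 1.0 models operational economies of scale (assumed)

def total_cost(users: int) -> float:
    """Total cost of serving `users`: fixed build-out plus sub-linear variable cost."""
    return FIXED_COST + BASE_UNIT_COST * (users ** SCALE_EXPONENT)

def marginal_cost(users: int) -> float:
    """Cost of supplying one additional user at a given scale."""
    return total_cost(users + 1) - total_cost(users)

def average_cost(users: int) -> float:
    """Total cost spread over every user served."""
    return total_cost(users) / users

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} users: marginal ${marginal_cost(n):8.2f}, average ${average_cost(n):8.2f}")
```

Running it shows the marginal cost shrinking as users are added, with the average cost converging down toward it, which is exactly the regime in which efficient pricing approaches marginal cost.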

So, who wins and who loses in this scenario? I get this question frequently, and it's well raised by Dan Farber and David Berlind, so let me respond to it here. But first, it's important to grok what it will actually entail to build a secure, scalable, and economically viable utility: an enormous amount of careful systems engineering. Even Dan points out that these network-scale service providers will have complex infrastructure, and that is absolutely correct. This isn't a reduction or simplification in what it takes to build out scalable systems; in fact, the apparent complexity might increase along a number of dimensions. By analogy, we can agree that electricity is a commodity in developed marketplaces, but the design of safe, clean, and efficient power plants isn't. Oil is a commodity, but there is enormous technological differentiation and capital investment in deep-water drilling and production platforms. The reason is that small changes in the efficiency of a power plant or a deep-water platform can have enormous economic benefits, because those small changes are multiplied by the commodity flow itself. Scale matters.
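
The "small changes times huge flow" argument is just arithmetic, but the magnitudes are worth seeing. Below is a back-of-the-envelope sketch; the facility size, electricity price, and one-percent gain are all assumptions for illustration, not measurements.

```python
# Back-of-the-envelope: value of a small efficiency gain at utility scale.
# Every number here is an assumption for illustration.
FACILITY_POWER_MW = 50.0          # assumed continuous draw of a large facility
HOURS_PER_YEAR = 24 * 365
PRICE_PER_MWH = 80.0              # assumed electricity price, $/MWh
EFFICIENCY_GAIN = 0.01            # a "small" 1% improvement

annual_energy_mwh = FACILITY_POWER_MW * HOURS_PER_YEAR
annual_energy_cost = annual_energy_mwh * PRICE_PER_MWH
annual_savings = annual_energy_cost * EFFICIENCY_GAIN

print(f"Annual energy bill: ${annual_energy_cost:,.0f}")
print(f"Value of a 1% efficiency gain: ${annual_savings:,.0f} per year")
```

With these assumed numbers a single percentage point is worth hundreds of thousands of dollars a year at one facility, and it compounds across every facility the utility operates.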

Again, computing utilities will be internally complex, and there will be big opportunities for those who learn to drive efficiency at scale. But here's the crux of the change: what you do to make a multi-tenant utility work well can be very different from what you would do to make an enterprise work well. What's important is different. In an enterprise setting, there is the complexity of heterogeneity in both the base platforms and the layered applications. Typically, very few applications get to the scale where serious optimization of their efficiency and manageability becomes the leading term. Instead, capital expenses, consolidation, and worst-case capacity planning tend to dominate.

Conversely, if you are attempting to make a network-scale service run well for thousands of customers or millions of end users, the sensibilities are different. Typical patterns are increased homogeneity at the lower layers, with significant attention paid to scale efficiencies around management, power, and other operating expenses.
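
One way to see why the sensibilities differ is to compare worst-case, per-application capacity planning with planning against a pooled, multi-tenant load. The sketch below uses made-up workload shapes and sizes purely for illustration; it provisions each tenant for its own peak, then provisions a shared pool for the peak of the combined load, and because the tenants' peaks don't all line up, the pooled build-out is much smaller.

```python
# Toy comparison: per-tenant worst-case provisioning vs. pooled provisioning.
# Workload shapes and sizes are made up purely for illustration.
import random

random.seed(42)
TENANTS = 200
HOURS = 24 * 7            # one simulated week, hourly samples

# Each tenant has a modest baseline and occasional bursts (assumed shape).
loads = []
for _ in range(TENANTS):
    baseline = random.uniform(1.0, 4.0)   # "units" of capacity
    series = [baseline + (random.uniform(5.0, 20.0) if random.random() < 0.05 else 0.0)
              for _ in range(HOURS)]
    loads.append(series)

# Enterprise-style: provision every tenant for its own observed peak.
per_tenant_worst_case = sum(max(series) for series in loads)

# Utility-style: provision the shared pool for the peak of the combined load.
pooled_peak = max(sum(series[h] for series in loads) for h in range(HOURS))

print(f"Sum of per-tenant peaks : {per_tenant_worst_case:10.1f} units")
print(f"Peak of the pooled load : {pooled_peak:10.1f} units")
print(f"Pooling needs about {pooled_peak / per_tenant_worst_case:.0%} of the worst-case build-out")
```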

So who wins? That will follow the "what's important" reasoning. If the customer in your mind is the typical enterprise, and you are optimizing for things like heterogeneity, server consolidation, and outsourcing, then you aren't likely doing something that will have direct appeal for someone trying to build a scalable utility. As far as that goes, the big issues I see there (read: opportunities) at the physical layer are things like networking -- specifically the "backplane" that interconnects the component server and storage elements -- and issues around power consumption, huge amounts of DRAM, and so on. At the logical layer are all of the layered abstractions I blogged about last time, but also truly effective systems management that worries about time- and space-division multiplexing of resources against service level objectives.
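
As a rough illustration of what multiplexing resources against service level objectives might look like in software, here is a toy allocator. The tenant names, entitlements, and objectives are invented for illustration, and the proportional-share policy is just one simple choice, not any particular product's mechanism: it divides a node's capacity among tenants by entitlement and flags anyone who falls below their objective.

```python
# Toy space-division multiplexer: split capacity by entitlement, check SLOs.
# Tenant names, entitlements, and objectives are invented for illustration.
from dataclasses import dataclass

@dataclass
class Tenant:
    name: str
    entitlement: float    # relative share weight
    slo_min_units: float  # minimum capacity the tenant's SLO requires

def allocate(capacity_units: float, tenants: list[Tenant]) -> dict[str, float]:
    """Divide capacity proportionally to entitlement (a simple share scheduler)."""
    total_weight = sum(t.entitlement for t in tenants)
    return {t.name: capacity_units * t.entitlement / total_weight for t in tenants}

def check_slos(allocation: dict[str, float], tenants: list[Tenant]) -> list[str]:
    """Return the tenants whose allocation falls short of their objective."""
    return [t.name for t in tenants if allocation[t.name] < t.slo_min_units]

tenants = [Tenant("batch-analytics", 1.0, 10.0),
           Tenant("web-frontend", 3.0, 40.0),
           Tenant("billing", 2.0, 35.0)]

allocation = allocate(100.0, tenants)   # 100 capacity units on this node
violations = check_slos(allocation, tenants)
print("allocation:", {k: round(v, 1) for k, v in allocation.items()})
print("SLO violations:", violations or "none")
```

A real utility would also time-slice (re-run the allocation as demand shifts) and would feed the violation list back into placement and admission decisions, but the basic loop of allocate, measure, and compare against objectives is the shape of the problem.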

Who loses? I'll leave that as an exercise to the reader.
