Computing As We Know It

Last week I met with 450 of the most important people in the world, people who have the most profound effect on our long-term future -- our world's educators -- at Sun's annual Worldwide Education and Research Conference. I feel a really deep bond with this group, not only having come from that world myself, but also being at a company that grew directly from a university (the "U" in SUN stands for University). To address these folks --- more than half this year from outside the United States --- is always a privilege and an honor.

This year, I spoke on two topics. I want to elaborate a bit on the first one, which was, roughly, the fundamental shift in the model of software development: from shrink-wrapped and packaged software (the Microsoft and SAP models --- sell bits) to software as a network service (the Google model --- sell cycles). As an industry, and as researchers, we have barely begun to wake up to the manifold implications of this shift. I'm not one given to hyperbole, but Computing As We Know It is changing, irreversibly I suspect.

At the core of the shrink-wrap and packaged software model is taking a software idea and expressing it as a program that is certified to run on widely deployed stacks of operating systems (and now, increasingly, middleware) on instruction set architectures. You then try to sell as many copies of those bits as you can. Note that I carefully said "certified to run on." The dirty secret of the whole software and systems industry is that the historically rich margins are largely supported by the extraordinary switching costs of moving an application from one platform to another. And it's NOT about openness or portability of source code, it's about proving (through insane amounts of testing) that a *binary* does what you think it does.

[Major Flamage... The fact that we feel compelled to regression-test binaries against the entire stack of middleware, OS, and firmware versions is a massive breakage of abstraction. We write in high-level languages with reasonably clean interface (API) abstractions, but collapse the entire layered cake when we go to binaries. Put it on the list of great failures of computer science.

Yes, virtual machines, the JVMs and CLRs of the world, would seem to ease this condition somewhat, but again, we distribute the apps as bytecode binaries, and while we should trust Write Once Run Anywhere, middleware providers are relentlessly clever in embracing and extending core platforms to keep those switching costs healthy. ...End Flamage]

Here's the critical issue: when you deliver your software as a network service, these switching costs are totally hidden from the user of the service. I mean, do you really know which OS your phone's mobile base station is running? (Okay, *some* of us do know...) So, the whole concept of stable OS/processor stacks that has defined the computing industry for the past three decades is about to be seriously whacked.

Take Google. The best understanding is that it runs a homegrown BSD derivative on not-leading-edge, velcro-mounted x86 motherboards. The only binary standard that the developers at Google care about is the one that they cook up for their own use.

I suspect that just about every software startup shaking the trees on Sand Hill Road (for the VCs who aren't still hiding under their desks) fancies itself the next Google. Does this mean that every new startup will bake its own computing farm?

Left to their own devices, perhaps. But something much more likely will happen, and this is the "Aha". New startups would be better served focusing on their ideas and relying upon the emergence of standardized farms --- computing utilities --- to provide an on-demand substrate. This is the seed idea behind the Sun Grid $1/cpu-hr and $1/gig-mo offerings. These, however, are raw cycles with simple OS-process and network-filesystem abstractions. On top of these will sit new deployment containers, and on top of those, new network applications.
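To make those published prices concrete, here's a back-of-the-envelope sketch of what a job would cost on such a utility. The pricing constants are the $1/cpu-hr and $1/gig-mo figures above; the job profile itself is hypothetical, purely for illustration:

```python
# Back-of-the-envelope cost of running a job on a utility grid at the
# published Sun Grid prices: $1 per cpu-hour, $1 per gigabyte-month.
CPU_HOUR_PRICE = 1.00   # dollars per cpu-hour
GB_MONTH_PRICE = 1.00   # dollars per gigabyte-month

def utility_cost(cpu_hours, gb_stored, months_stored):
    """Total cost = compute (cpu-hours) + storage (gigabyte-months)."""
    compute = cpu_hours * CPU_HOUR_PRICE
    storage = gb_stored * months_stored * GB_MONTH_PRICE
    return compute + storage

# e.g. a hypothetical 500 cpu-hour batch run that keeps 50 GB of data
# around for 2 months:
print(utility_cost(500, 50, 2))  # → 600.0
```

The point of the simple linear model is exactly the "commodity" framing: you pay for computational energy and for remembering stuff, nothing else.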

Aha! A *new* stack (and ecosystem around it) will emerge. I think it will loosely follow the layering below:

The base layer is what I'll call the "Utility" comprising raw cpu-hours and gigabyte-months. These are the commodities of computational energy and remembering stuff.

The next layer is deployment "Containers". Today, we think of most grids as OS-specific process containers, but I'm betting this will rapidly evolve into more robust abstractions such as J2EE and Jini. Similarly, storage containers will comprise things like relational databases and fixed-content stores.

The next layer is likely to have its own internal structure, but I'll lump it all together as "Application Services". It's the thousands of services that will be created around things like search, email, CRM, gaming, ERP, and on and on.

Finally, I'll bet that this will give rise to whole new "Application Networks" that aggregate and stitch together the network applications to create the whole fabric of completely re-factored enterprise IT, or perhaps new styles of businesses entirely. Somebody is going to make a lot of money on this layer.
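The four layers above can be written down as a simple ordered model. This is just my shorthand for the post's own layering --- the names and examples come from the text, not from any product or official taxonomy:

```python
# A minimal sketch of the proposed network-services stack, bottom to top.
# Layer names and examples follow the post; this is illustrative only.
STACK = [
    ("Utility", ["cpu-hours", "gigabyte-months"]),
    ("Containers", ["OS processes", "J2EE", "Jini",
                    "relational databases", "fixed-content stores"]),
    ("Application Services", ["search", "email", "CRM", "gaming", "ERP"]),
    ("Application Networks", ["re-factored enterprise IT",
                              "new styles of business"]),
]

# Print the stack bottom-up, the way the post walks through it.
for level, (layer, examples) in enumerate(STACK, start=1):
    print(f"{level}. {layer}: {', '.join(examples)}")
```

Note how each layer only consumes the abstraction directly beneath it --- the Application Networks layer never sees raw cpu-hours, which is precisely why the old OS/processor switching costs disappear from view.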

How this new stack will change the balance of power in the computing industry is a much longer discussion. Suffice it to say that what's important will change, and a whole bunch of stuff will follow.

Which brings me back to my good friends in academia. I'm looking to you to help navigate this world, to really understand what will be important, and to understand where innovation can and should take place. To start, we are going to make available a million cpu-hrs and a hundred terabyte-months.
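At the utility prices quoted earlier, it's easy to put a rough dollar figure on that grant. The one assumption in the arithmetic below is decimal units (1 terabyte = 1,000 gigabytes); swap in 1,024 if you prefer binary units:

```python
# Rough dollar value of the academic grant at the published utility
# prices ($1/cpu-hr, $1/gig-mo). Assumes 1 terabyte = 1,000 gigabytes.
cpu_hours = 1_000_000   # the million cpu-hrs offered
tb_months = 100         # the hundred terabyte-months offered
gb_per_tb = 1_000       # assumption; 1,024 under binary units

value = cpu_hours * 1.00 + tb_months * gb_per_tb * 1.00
print(f"${value:,.0f}")  # → $1,100,000
```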


And honestly, just a few years ago, I thought computing was getting boring. And now I can't wait to witness the industry morph over this decade.

