Thursday Oct 13, 2005

How the New iPod will lead to its self-extinction (eventually!)

From a techie's vantage point, the "new" iPod is a pretty simple hack: just add a video codec to the software mix on the device. 'Course it's essential that the display is good enough, that there is sufficient processing oomph to get a decent frame rate, and that the batteries can handle the constant display activity. Storage wasn't ever the issue. Just some software.

[Aside: I was one of the co-founders of PictureTel, where I designed the signal processing hardware architecture. We ended up doing a completely custom bit-sliced design, and ganged seven large-pizza-box-sized boards to build a complete codec. The whole thing was the size of a mini-refrigerator. Twenty years later, it's now "just some software" on a consumer device...]

Lots of folks are going to wag about (lack of) content, screen size, etc. No doubt, all important to making this more than a hack and a real market success.

But I'm going to assume that it will be. What are the consequences? One of them will be the reinforcement of a proprietary distribution network and rights management system. That's a whole other topic. See what Jonathan says about why we have created the Open Media Commons.

Another will be around broadband demand. And that leads to a very interesting longer-term question about the best way to connect displays to the Internet. The line of reasoning starts with a simple observation: the network bandwidth to the (machine synchronizing with the) iPod will have a first-order effect on usability. The first-order analysis is pretty simple. The H.264 video codec is running at 750Kbps, plus another 128Kbps of AAC audio. Call it a megabit per second. This is a somewhat low bit rate, video-quality-wise (3 or 4Mbps is a better number to keep in your head for DVD-level quality). The saving grace is the relatively small-sized screen (just about CIF).

So, if you have what we call today "broadband" (1.5 to 4 Mbps), then you can do the math. You might get 2x or 3x faster than real time for your download. Meaning, you'll wait 20 minutes to download an hour of video. Kinda best case, actually.

Your experience would be a whole lot better if you had, say, a 25Mbps or 100Mbps network connection. In that case it's just a minute or two for the download. There are many cable and wireline (and fixed wireless) operators who are aiming for this level of residential bandwidth. The build-out is a bit of a leap of faith. Who will possibly need all of that bandwidth? Without a lot of imagination the business case has it filled up with broadcast video content combined with network video-on-demand.
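To make the arithmetic concrete, here is a minimal sketch (in Java) that turns the numbers above --- roughly 750Kbps of H.264 plus 128Kbps of AAC --- into download times at a few link speeds. The link speeds are just the illustrative figures from this post, not measurements.

```java
/**
 * Back-of-the-envelope download times for an hour of iPod-class video,
 * following the numbers in the post: ~750 Kbps H.264 + ~128 Kbps AAC.
 * The link speeds are illustrative assumptions, not measurements.
 */
public class DownloadMath {
    public static void main(String[] args) {
        double contentKbps = 750 + 128;                     // video + audio, call it ~1 Mbps
        double contentMbits = contentKbps / 1000.0 * 3600;  // one hour of content, in megabits

        double[] linkMbps = {1.5, 4.0, 25.0, 100.0};        // "broadband" today vs. build-out targets
        for (double link : linkMbps) {
            double seconds = contentMbits / link;
            System.out.printf("%6.1f Mbps link: %5.1f minutes to fetch one hour of video%n",
                              link, seconds / 60.0);
        }
    }
}
```

Run it and you get the same ballpark as the prose: tens of minutes on today's "broadband", a minute or two at 25Mbps and beyond.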

But, you wanna refresh your shiny New iPod with fresh content? Now that will suck down a lot of bandwidth. Let me emphasize "refresh". I, for one, can listen to a piece of music many, many times. But with the exception of a few films (say, Dr. Strangelove or The Blues Brothers), once is enough.

Perhaps offline video play is the killer app for online bandwidth?

Perhaps. Though let's think it through some more. The whole reason to store content on a portable player is the lumpiness of bandwidth: both its pervasiveness and its service level. You want to carry it with you because you want to listen to (or watch) content in places where you don't have good enough networking to play it directly online. But if you did, I'd claim that the interest in explicitly managing your device, or even taking it around with you --- fashion statement aside --- would go down significantly.

Why? Lots and lots of reasons that I won't belabor here because it gets to be a surprisingly emotional issue. I'll deflect a bit and speculate that for every person who wants to "own" their content and carry it around with them (in the form of a personal player or personal computer), there is someone like me who wants to get access to my stuff no matter where I am or whatever device I pick up or use. That's ownership, too, just more abstract. Call it my Personal Network. I, for one, don't want the bother of syncing stuff or worrying about it getting lost, etc.

An even bigger reason is that pervasive networking gets you access to the entire connected world, especially interactive and participative content. The truly interesting stuff will be the result of new realtime communities that emerge and the fusion of services that intermediate them.

I'll bet that we will look back on this era of quasi-networking and wince, "How did we ever live that way?" And the idea of wanting to carry all of your content with you will seem both old-fashioned and rather ridiculous.

Here's my prediction: a successful portable video player (perhaps the iPod is it) will be THE big driver for significant increases in bandwidth and connectivity. Those increases, over time, will ultimately undermine the utility of "carry your stuff with you" personal devices and lead to the next era of real-time connected personal network.

Wednesday Sep 14, 2005

Rest in Peace, Lew

It was just a week ago, a bright Wednesday afternoon, that we walked in the parking lot together after the SETI board meeting, you rushing off to your next meeting. "I'm really sorry", you said. "We'll definitely get together soon and finish the conversation." And then I got the terrible news Friday morning. We won't complete that talk, but I've got something important for you to hear. I need you to hear what you've meant to me. To thank you for the keen insight, encouragement, friendship and mentoring you have given me over the past couple of years. Even in the middle of all the events at Boeing, you spent the time and energy with us at the Institute. There is a huge smile on my face remembering our trip up to Hat Creek in Jack's plane, tromping around the observatory, and trying to invent an on-the-fly wind vibration solution. And I will always cherish the primer on Italian wines you sketched out for me on the back of a wine list.

But more than anything, I'd like you to know what a truly caring, compassionate human being you've been. A Fortune 100 CEO who drives his own car, and stops on mountain roads to help others fix their flats. You are a Good Soul, my friend.

Rest in Peace, Lew Platt.

Thursday Jul 21, 2005

An Unwavering Business Model of Innovation

I have a special place in my heart for HP; my UCSD bachelor's degree in-hand, they gave me my first full-time engineering job in their San Diego Division (at that time, pioneering the paper-moving plotters and early ink-jet printers). It has been sad, though, watching the erosion of an "engineer's engineering" company with an innovation-first culture into something that looks, to this CTO, like it has spent the last five years chasing other people's business models. That hasn't happened here at Sun. And Scott and Jonathan have the Wall Street scars to show you what it means to stay focused and true to the long-term and \*central\* value of R&D. These days, you can really feel the buzz at Sun that the unwavering dedication we've had on innovation and \*challenging conventional wisdom\* is paying off big-time: Solaris, Java, throughput computing, developer tools, storage virtualization, proximity I/O, scalable x64 servers, application switching, real time, display-over-IP, utility computing, ...

So, an open invitation to my good friends at HP Labs who are looking for a place that deeply values and respects open innovation: drop me a line at greg@sun.com.

Tuesday Jul 19, 2005

Innovate, Create – and Compensate!

Here is a piece I co-wrote with Susan Landau around the Grokster case. (Susan Landau is a Distinguished Engineer in Sun Labs and the co-author of 'Privacy on the Line: The Politics of Wiretapping and Encryption,' MIT Press (1998)).

The Internet is a place of innovation, creation, and communication – and we should do our best to keep it that way. Nothing in the Grokster decision said otherwise. The Supreme Court simply said that you better not base your business model on copyright infringement. It wisely avoided creating some kind of test for underlying technologies.

Twenty years ago the Court ruled in the Sony Betamax decision that devices capable of substantial non-infringing uses were legal, even if such devices could be used in copyright violations. As Justice Breyer wrote in a concurring opinion in the Grokster case, "There may be other now unforeseen non-infringing uses that develop for peer-to-peer software, just as the home-video rental industry (unmentioned in Sony) developed for the VCR." His point – stopping technologies when they are young and evolving could kill off great promise and benefits that lie down the road. That is why the Court specifically focused on bad behavior while leaving the old Sony standard alone. Exactly right --- don't constrain the technology; constrain bad actors.

Innovation has flourished, and this country has reaped the rewards, because Internet technologies enable the rapid, widespread, and often anonymous flow of information. Combine that free flow with advances in digital media -- photography, video, music -- and you have an amazing opportunity for wide-scale experimentation and creative expression.

Just think: Two decades ago, home computers brought us a revolution called desktop publishing. Now home users have the tools to create professional-quality movies and music -- and a way to share them with others.

Lately, though, the Internet has become a place of conflict and contention. Why? Because people are worried about what happens to content that carries a copyright. If it's easy to copy and transmit, how can we make sure artists are compensated, as they should be, for their creative work?

Just as important, how can we do so without quashing experimentation and innovation?

Artists should be compensated. There's no question about that. But in our rush to defend their rights, let's not overlook the second question. We believe public policy should encourage innovation and free speech. It should, as always, seek to balance the rights of individuals with the greatest public good. As Justice Breyer noted, “copyright's basic objective is creation and its revenue objectives but a means to that end.” (That's why, for instance, copyright protection doesn't last forever. More is gained in the long run from sharing.)

One of the great values of the Internet is that it has become a forum for borrowing, mixing, developing, and tinkering. After all, in both science and art, innovators build on each other's work. In the words of director Martin Scorsese, "The greater truth is that everything -- every painting, every movie, every play, every song -- comes out of something that precedes it.... It's endlessly old and endlessly new at the same time." We must not make the mistake of entrenching the endlessly old at the expense of the endlessly new.

So the developing discipline of digital rights management, or DRM, needs to respect the experimental, standing-on-the-shoulders-of-giants aspects of the Internet. DRM technology should be designed to respect the legitimate needs and current rights of honest users (including backups, format changes, excerpting, and so on).

While the Internet certainly makes managing the rights for movies and music more complex, we believe that it is sounder economic and social policy to foster the architectural, business, political, and public freedoms that have enabled the Internet to be a place of innovation than it is to overly restrict the flow of digital information in an effort to meticulously account for every instance of the use of content.

What's more, the free flow of information is fundamental to democracy. In the shift to new forms of media and communication, neither technology nor law should limit the public's rightful access to information. Again, if we can go back to the intellectual well one more time, Justice Breyer rightly acknowledged that “the copyright laws are not intended to discourage or to control the emergence of new technologies, including (perhaps especially) those that help disseminate information and ideas more broadly or more efficiently.” Very true. So where do we go from here? We think there is a broad set of solutions in which the rights of content creators can be balanced with the common public interest in order to foster vibrant innovation. To that end, we'd like to propose the following principles of digital rights management:

- Innovation flourishes through openness -- open standards, reference architectures, and implementations.

- All creators are users and many users are creators.

- Content creators and holders of copyright should be compensated fairly.

- Respect for users' privacy is essential.

- Code (both laws and technology) should encourage innovation.

Some content owners are pressing for DRM systems that would fully control the users' access to content, systems with user tracking that limits access to copyrighted material. We instead prefer an "optimistic" model whose fundamental credo is 'trust the customer.' Excessive limitation not only restricts consumer rights but also creative potential, as such solutions strongly interfere with the creation of future works and fair use of copyrighted content.

In an ideal world, solutions should encourage information flow, including the capability for creating future works. Certainly there will always be "leakage" and illegal behavior. Where that occurs there should be diligent enforcement of owners' legitimate rights. BUT, we think it is better that solutions provide auditing and accounting paths that, while respecting the privacy of honest users, also permit copying, manipulation, and playback.

Systems that encourage the user to play with digital material, to experiment, to build and create, will be a win for consumers, for technology developers, and for content producers. The Court has spoken, and it did so with restraint. Now it is up to technologists, artists, developers, users, and rightsholders to move ahead in a balanced and forward-looking manner. If we do, it will be a win for the Internet and for society.

Tuesday Jun 07, 2005

INTEL: IT'S TIME TO SHARE

Apple's decision to move to the x86 architecture will have enormous implications for ISVs. Applications will have to be re-compiled, re-tested, and re-certified, including the mother of all of Apple's apps: Mac OS X. "Which processor is that?" will be added to the decision tree for thousands of support techs. All of this at substantial cost and, no doubt, with user confusion. Trust me on this: our support of Solaris 10 across SPARC and x64 is a lot more than an expensive hobby. It's real work and real money.

And it's all a needless waste.

We, meaning the industry, should long ago have converged on a single, completely open core instruction set architecture (ISA). Why? Because ISAs have mostly lost any differential end-user benefit, while creating a whole lot of costs: the aforementioned ISV certification and support of different binaries. Those create artificial switching costs, which in turn create market friction, which in the end means higher prices and less choice for consumers.

The really tragic part is that it doesn't need to be this way. We've long had our choice of open architectures (e.g., SPARC and MIPS), but the "tyranny of binary compatibility" has now driven even Apple to a proprietary one: Intel's.

I often get that quizzical look when I describe the x86 architecture as proprietary and SPARC as open. But those are the facts. We have long since shared SPARC. We helped set up SPARC International as an independent body, and anyone can register with it for as many chips as they care to make for only $99. It's even an IEEE standard, completely open and without royalties.

And x86? Go give Intel $99 and tell them that you want to stamp out millions of chips that say "Intel Architecture Compatible" on them. Tell them that all you want is to copy the ISA. You will do your own implementation with your own design team and own intellectual property. And exactly what do you think they will say? Exactly.

I'm talking about the ISA, not the implementation. The distinction is an important one.

Certainly through the mid-nineties the simplicity of RISC architectures permitted superior implementations of high performance microprocessors (just as they are today enabling comparatively more aggressive multi-core and multi-threaded microsystems such as Niagara). Just to be clear, while the differences among RISC architectures (SPARC, MIPS, Power, Alpha,...) don't lead to any discernible ISV or end-user benefit, the differences among their implementations can and do: the absolute and relative price, performance, reliability and power consumption. The fact that the implementations differ is all about design teams, their goals, and their execution.

Apparently, IBM did not execute well in the performance-per-watt part of the low power space. Actually, Apple must have felt very negative about IBM's low-power futures at a time when laptops are starting to exceed desktop volumes. The PowerPC architecture is just fine. IBM's engineering isn't.

In contrast, x86/x64 is truly ugly, architecturally speaking. What's great has been Intel's and, more recently, AMD's implementations. Intel did a masterful job executing the 32-bit portable Centrino and the Xeon server lines, and AMD is kicking some serious butt with the 64-bit Opteron. All of this in spite of a nasty ISA.

So, the actual ISA doesn't really matter any more computer-science-wise. Intel makes its money by making good chips, but it supports the margins on those chips by effectively limiting the competitive field. Intel is a high volume, but nonetheless proprietary, chipmaker.

One answer to all of this would be to create a Lessig-like Commons. Essentially, accept that all of society and the whole of industry is better served when ubiquitous, royalty-free interface standards belong to everyone (e.g., TCP/IP, radio modulation, etc.).

That could have come about two ways. Ideally, it would have been a process led by academia (particularly appropriate because, IMO, microprocessor makers have done much more harvesting of academic research than they have made contributions).

If I'm feeling especially surly, I label the lack of a concerted academic effort to define a standard ISA, and the quiet willingness to let academic research further proprietary industry ones, as one of the Great Failures of the computer science community. Shame on all of us.

Given that it didn't happen, and we are most definitely at the point where yet another ISA would be about as exciting as a new indoor plumbing standard, it's up to us in the industry to let the proprietary ISA era fade to black.

One way we do that is by simply removing the patent and copyright claims to all of our instruction set architectures. Let any design team in the world choose, royalty free, to pursue whatever architectural path they wish. To participate in the evolution of our computing commons. It's called sharing.

We've been sharing SPARC for years. I challenge Intel to share their ISA in the same way. You too, IBM.

The other way we free the market from proprietary instruction sets is through virtual machines. The byte codes of the Java Virtual Machine (JVM) are the binary of an ISA. It's just that (almost) all implementations are very efficient software layers that abstract the underlying physical machine, making its identity irrelevant. It's a huge benefit for developers because they can spend most of their energy developing and verifying to the JVM, independent of the actual implementation. That cuts multiplatform support costs markedly. The double bonus is that writing to Java is also significantly more productive.
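A trivial illustration of the point, assuming nothing beyond the standard Java system properties: the same compiled class file runs unchanged on a SPARC, x86/x64, or Power JVM, and the underlying ISA only shows up if you explicitly ask for it.

```java
// A minimal sketch: one class file, any JVM. The ISA underneath is visible
// only through system properties, never through the byte code itself.
public class WhereAmI {
    public static void main(String[] args) {
        System.out.println("os.arch = " + System.getProperty("os.arch"));
        System.out.println("os.name = " + System.getProperty("os.name"));
        // The logic above neither knows nor cares which answers it got.
    }
}
```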

Apple, imagine that most of your software and that of your primary ISVs were written in Java. Just think of how much easier your platform change would be! To help you get started we've got some terrific tools, free for the taking.

Monday May 09, 2005

How to Win a Nobel Prize in Economics

Time will tell whether last Friday's U.S. Court of Appeals "broadcast flag" ruling will stand up under further appeal (or whether, possibly, Congress subsequently passes a law giving the FCC explicit authority on the topic). However it turns out, it's going to raise a lot of public debate and, with hope, awareness of the whole topic of digital rights: who gets to do what with whose bits?

This debate gets unconstructively polarized by absolutism. "The creators of bits own and control them", at one pole. "The consumers of bits get to do with them what they darn well please", at the other. I seldom find either one of these positions tolerable to listen to for long because both are fundamentally flawed. And, both sides deep down know it.

Deep down, creators know that they are also consumers. All artistic endeavor builds upon what went before it. Film makers re-tell millennia-old stories, mixing in their own life experiences. Software developers recast patterns and algorithms, adding their own nuances and insights.

And, consumers know (abstractly, at least) that denying creators compensation will ultimately undermine the enterprises of entertainment and software.

The tension between creators (producers) and consumers is ultimately about economics. In reasonable marketplaces this gets sorted out with a supply-demand equilibrium set by price. As supply becomes scarce relative to demand, prices increase. As supply becomes abundant relative to demand, prices fall. Microeconomics 101.

It works because physical goods (atoms) are rivalrous. If I sell you an apple, you've got it and I don't; supply just got decremented. But it breaks down in the digital world because bits ain't. If I sell you some bits, you've got them and I do too; supply is unaffected.

The historical way of dealing with this is to tie the bits with some atoms (e.g., imprint them on a CD), and pretend like we are in a traditional economy. That worked pretty well --- if I sell you my CD, I'm deprived of it --- until it got really cheap to copy and store the bits off of the CD. I could make a copy, give you my CD, and I still enjoy the bits.

This was still okay, to a point, because it required effort and energy to redistribute the CD. But with the relentless growth in Internet access bandwidth, \*distribution\* costs are also driven to zero. The result is that just about everything we have come to depend upon in capitalist markets disintegrates.

Bearing in mind that it still costs real money to create those bits (a song, a movie, a program), but essentially zero dollars to give everyone connected a copy, well, Houston, We Have a Problem. DRM, broadcast flags, copy protection and the like are designed to "solve" this redistribution problem by making it tough to do. Mostly, I think these attempts will fail because the tighter you try to make the system, the more you inconvenience the consumer.

What's the answer? It's neither the perfection of DRM, nor the absolution of digital property rights. I am sure that there IS an economic system that solves the problem. I don't know what it is, but I can speculate about its properties.

I want to start by turning this all around by thinking about the people who are LEAST able to pay for bits. The world's economically poor. That is, most of the people with whom we share this planet. If you produce some bits --- say, a program, a movie, research or a textbook --- it literally costs you nothing to share these bits with people who couldn't come even close to affording what people on the connected side of the digital divide can.

And we really have to ask ourselves, on what basis can we deny these people access to bits? Because they can't afford them? Remember, it costs you nothing to deliver copies of them.

(You could come to the conclusion we have: encourage global participation in software development by making access free, and design a license (CDDL) that stimulates diversity and the development of derived works. Solaris is free. So is OpenOffice. For those who can afford it, you can pay us for support.)

Let me make this a little easier. What if someone could pay you only a dollar for software you usually charge a hundred? Would you do it? As long as you didn't fear it being resold, of course you would. It's in your economic interest. You would have a dollar that you wouldn't otherwise have.

And it's in the world's social interest because that person is less disadvantaged, digitally speaking. They are, in fact, peers to those who could afford the hundred dollar price tag.

Both parties, creator and consumer, are better off. This is why I know there must be an economic solution. What maximizes a capitalist good also maximizes a social one.

An essential aspect of the solution is that people pay what they can afford to. There are a number of ways to get to that. One of my favorites is a kind of "subscription to everything" that was first introduced to me by Steve Ward at MIT. He originally cast this as a solution to software pricing wherein you would pay a flat fee for all of the software you could possibly consume. Statistical sampling of the software that you actually consumed would help apportion the revenues to the software writers, much in the way that ASCAP apportions revenue for broadcast music.

Now imagine that you extended this model to all bits. Suppose that you paid a monthly "digital tax" proportional to your income and consumed whatever your heart desired. Make copies, share freely, whatever. Statistical sampling of aggregate consumption --- it can be made anonymous --- would divide revenues among creators. In a system like this, anyone (read: everyone) can also contribute their bits to the pool, and benefit according to their popularity. As long as we are creating a Utopia, we can imagine that remuneration is somehow proportional to entropy added: you'd get very little claim on a new music playlist you assembled (the artists getting most of it), a lot of claim on a new song you just wrote and performed, and something in the middle for a remix.
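To make the mechanism concrete, here is a minimal sketch of the pro-rata split described above. The pool size, titles, and sampled hours are invented for illustration; the post deliberately leaves open how consumption would actually be metered and weighted.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A minimal sketch of the ASCAP-style apportionment described above: a flat
 * subscription pool is split among creators in proportion to sampled
 * consumption. All figures are hypothetical.
 */
public class SubscriptionPool {
    public static void main(String[] args) {
        double monthlyPool = 1000000.0;   // hypothetical pooled subscription revenue

        // Sampled, anonymized consumption, measured here in hours consumed.
        Map<String, Double> sampledHours = new LinkedHashMap<>();
        sampledHours.put("song A", 40000.0);
        sampledHours.put("film B", 25000.0);
        sampledHours.put("remix C", 10000.0);

        double totalHours = sampledHours.values().stream()
                .mapToDouble(Double::doubleValue).sum();

        for (Map.Entry<String, Double> e : sampledHours.entrySet()) {
            double share = e.getValue() / totalHours;   // pro-rata share of sampled attention
            System.out.printf("%-8s gets $%,.2f (%.1f%% of the pool)%n",
                              e.getKey(), share * monthlyPool, share * 100);
        }
    }
}
```

The "entropy added" weighting would slot in as a per-work multiplier on the sampled hours; how to set it is exactly the open question.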

The really nice aspect of this system is that copying is strongly encouraged, because the more chances someone has of consuming your bits, the better your chances are of getting some revenue from them. Advertising for demand creation is also incented. You might decide to spend money advertising in order to entice people to consume your creation.

The parts that don't work are pretty obvious. How is the subscription rate set for a person? Is it a government role? What if I choose to live my life digital-free: should I have to pay taxes at all? Finally, how do I possibly compare the consumption of a bit of software versus a bit of music or a bit of video? (It turns out that time spent consuming the content is an interesting measure.)

Again, I don't really have an answer. But I know what it's not. It's not the absolute control over a device's ability to render digital content. The answer is something (such as a sampled subscription service) that makes use of the very network to solve the problem that gave rise to it in the first place. Actually, I view it all as opportunity and upside. It's about sharing and eliminating the digital divide.

If you can figure it out, you are on your way to Stockholm.

Friday Apr 08, 2005

One Year Later

I vividly remember a talk in 1998 by Eric Schmidt when he was asked if the Internet was being over-hyped. He responded resolutely: "No, it's under-hyped." And he went on to explain something that we know all too well in tech: most things are over-estimated in the short term and under-estimated in the long. The more extraordinary it is, the greater the spread in expectations.

Sun and Microsoft announced a year ago April 2nd that we were going to do something close to extraordinary: cooperate. All kinds of things were freezing over, dogs and cats started living together, the Sox won the World Series, and Sun and Microsoft engineers started communicating. They even have pictures!

The touchstone was pretty simple to state: "Focus on our mutual customers." And their messages were clear and scarily consistent. The lack of any substantial technical cooperation over the last decade has created needless cost and complexity. Something I have come to term "gratuitous incompatibility". At the top of everyone's list is identity, from directory synchronization to single sign-on. Right after that are issues around systems management, virtualization, and developer productivity for web services.

The issues, by the way, are not about some missing wad of software, but about getting our stacks to interoperate. In fact, the "interoperate" message is louder than even the "standardize" one. Simply put, everyone is fed up with getting our stuff to work together in the field as An Exercise Left to the Customer.

So what have we been doing? Systematically going through the places where our stacks touch one another, and then locking the respective architects into conference rooms until they figure out how the stuff is really supposed to work together. And, at times, it has been really trying for both teams. Why? Because software architects are the Artistes of the IT industry. Often, things are more driven by taste, judgment and perspective than they are by quantifiable technical tradeoffs. Imagine having to get Gaudi and Wright to agree on a doorway between two rooms. After raging debates they will likely settle on something that looks familiar from the perspective of each room. It's likely that the technical specifications of minimum height and width were agreed upon quickly.

So goes our relationship. Just look at the places where we are sitting at the table together, with some real architectural progress having been made in the areas of identity, web services and management. (As examples: WS-Addressing, WS-Management, WS-Eventing, WS-MetadataExchange)

And, we are now to the stage of publicly committing products around these agreements. Real Soon Now.

Of course, I've made all of this sound "new", but the fact is that we've both had to deal with interoperability and standards issues with one another for a long time. For our part, it's been things from StarOffice and OpenOffice, to mail and messaging, to really well-integrated WS-I web services as a core part of J2SE. The important difference now is that we are able to do a lot more --- more efficiently and effectively --- simply because our people are talking with one another.

On a personal level, the experience has been extraordinary. Everyone always asks me what it is like to have 1:1's with Gates, and the answer is not what most people expect. He's got two sides to his personality: a smart, genuine and very approachable geek (I found that surprising) and a hard-edged business guy (not surprising). I truly enjoy our interactions when we are in geek-mode. There is broad common ground on where things are going, what the problems are with getting there, and why we need a relentless focus upon innovation. And let's just say I feel differently when his biz-mode kicks in. 'Nuf said.

The culture is also hard-working, and Gates surrounds himself with some very smart folks (I've had the pleasure of working with the likes of John Shewchuk and Andrew Layman). At an IQ level, it feels like Sun. People are passionate about what they do and have the grey matter to back it up.

All this being said, there is this deeply entrenched fundamental difference in perspective. Microsoft tries to perfect the closed system. And by this I mean in the "closure" sense, not in the interfaces sense. Longhorn is fashioned to be self-consistent, meaning that when all of the pieces come together, they are designed to create this uber-architected interlocking machine. The primary value is that it all works together. It's integration.

We (Sun) try to perfect the open one, meaning that we try to maximize the value of innovation that occurs elsewhere. It's not the way the pieces interlock, but the way that the new ideas of others layer. It's inherently messier --- less Cathedral and more Bazaar --- but, in our belief system, more robust future-proof-wise and more inclusive and participative (e.g., the JCP and Liberty Alliance). Get updated on Jini if you want to see the quintessence of a wide-open system focused upon Change as the Constant. Just enough architecture.

Once we set the emotional issues aside, it's actually pretty straightforward to frame interoperability with the Microsoft architecture. There is a lot of it to be sure, but at a pragmatic level, it's simply another (important) set of protocols over which we federate.

What should you expect from the relationship going forward? Each of us will defend our approach and we'll compete like crazy. I'm a big believer in innovation networks and the perils of over-architecting, and (obviously) won't pull punches on the topic. But, shame on us both if we don't truly reduce the complexity of developing, deploying and assuring network services.

Go Sox.

Friday Mar 11, 2005

Winners and Losers

Jonathan has an excellent summary of the big picture transition from customize to standardize to utilize on ZDNet. While we are still all looking for the right vocabulary, a key point is that you know that you are doing utility computing if you "utilize it" (okay, a bit circular). Importantly, you know you \*aren't\* doing it if you are "customizing it". Why the distinction is critical is that true utility computing enjoys a scale economy: the marginal cost of supplying an additional user declines with scale. And, if things are really efficient, economically speaking, prices will approach the marginal cost (yes, driving average profits to zero).
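A toy illustration of that scale-economy claim, under an assumed cost model of a large fixed build-out cost plus sub-linear operating cost (the constants and exponent here are made up purely for illustration, not Sun data):

```java
/**
 * A toy scale-economy sketch: C(n) = F + a * n^0.8, i.e., a fixed build-out
 * cost plus sub-linear operating cost. All numbers are invented; the point
 * is only that marginal and average cost per user fall as scale grows.
 */
public class ScaleEconomy {
    static double cost(double users) {
        double fixed = 10000000;   // assumed build-out cost
        double a = 50;             // assumed operating-cost coefficient
        return fixed + a * Math.pow(users, 0.8);
    }

    public static void main(String[] args) {
        for (double n : new double[]{1e4, 1e5, 1e6, 1e7}) {
            double marginal = cost(n + 1) - cost(n);   // cost of serving one more user
            double average = cost(n) / n;              // average cost per user
            System.out.printf("users=%,12.0f  marginal=$%8.4f  average=$%10.4f%n",
                              n, marginal, average);
        }
    }
}
```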

So, who wins and who loses in this scenario? I get this question frequently, and it's well raised by Dan Farber and David Berlind, so let me respond to it here. But first, it's important to grok what it will actually entail to build a secure, scalable, and economically viable utility: an enormous amount of careful systems engineering. Even Dan points out that these network-scale service providers will have complex infrastructure, and that is absolutely correct. This isn't a reduction or simplification in what it takes to build out scalable systems, and in fact apparent complexity might increase in a number of dimensions. By analogy, we can agree that electricity is a commodity in developed marketplaces, but the design of safe, clean and efficient power plants isn't. Oil is a commodity, but there is enormous technological differentiation and capital investment in deep-water drilling and production platforms. The reason is that small changes in the efficiencies of power plants or deep-water platforms can have enormous economic benefits because those small changes are being multiplied by the commodity flow itself. Scale matters.

Again, computing utilities will be internally complex, and there will be big opportunities for those who learn to drive efficiency at scale. But here's the crux of the change: what you do to make a multi-tenant utility work can be very different than what you would do to make an enterprise work well. What's important is different. In an enterprise setting, there is the complexity of the heterogeneity of both the base platforms and the layered applications. Typically, very few applications get to the scale where serious optimization of their efficiency and manageability becomes the leading term. Instead, capital expenses, consolidation, and worst-case capacity planning tend to dominate.

Conversely, if you are attempting to make a network-scale service run well for 1000's of customers or millions of end-users there are different sensibilities. Typical patterns are increased homogeneity at the lower layers, with significant attention paid to scale efficiencies around management, power, and other operating expenses.

So who wins? That will follow the "what's important" reasoning. If the customer in your mind is the typical enterprise, and you are optimizing for things like heterogeneity, server consolidation, and outsourcing, then you aren't likely doing something that will have direct appeal for someone trying to build a scalable utility. And, as far as that is concerned, the big issues I see there (read: opportunities) at the physical layers are things like networking --- specifically the "backplane" that interconnects the component server and storage elements --- and issues around power consumption, huge amounts of DRAM, etc. At the logical layers are all of the layered abstractions I blogged about last time, but also truly effective systems management that worries about time- and space-division multiplexing of resources against service level objectives.

Who loses? I'll leave that as an exercise to the reader.

Thursday Feb 24, 2005

Computing As We Know It

Last week I met with 450 of the most important people in the world, people who have the most profound effect on our long-term future -- our world's educators -- at Sun's annual Worldwide education and research conference. I feel a really deep bond with this group, not only having come from that world myself, but also being at a company that grew directly from a university (the "U" in SUN stands for University). To address these folks --- more than half this year from outside the United States --- is always a privilege and an honor.

This year, I spoke on two topics. I want to elaborate a bit on the first one, which was roughly about the fundamental shift in the model of software development from shrink-wrapped and packaged software (the Microsoft and SAP models --- sell bits) to software as a network service (the Google and Salesforce.com models --- sell cycles). As an industry, and as researchers, we have barely begun to wake up to the manifold implications of this shift. I'm not one given to hyperbole, but Computing As We Know It is changing, irreversibly I suspect.

At the core of the shrink-wrap and packaged software model is taking a software idea and expressing it as a program that is certified to run on widely deployed stacks of operating systems (and now, increasingly, middleware) on instruction set architectures. You then try to sell as many copies of those bits as you can. Note that I carefully said "certified to run on." The dirty secret of the whole software and systems industry is that the historically rich margins are largely supported by the extraordinary switching costs of moving an application from one platform to another. And it's NOT about openness or portability of source code, it's about proving (through insane amounts of testing) that a \*binary\* does what you think it does.

[Major Flamage...The fact that we feel compelled to regression test binaries against the entire stack of middleware, OS, and firmware versions is a massive breakage of abstraction. We write in high-level languages with reasonably clean interface (API) abstractions, but collapse the entire layered cake when we go to binaries. Put it on the list of great failures of computer science.

Yes, virtual machines, the JVMs and CLRs of the world, would seem to ease this condition somewhat, but again, we distribute the apps as byte code binaries, and while we should trust Write Once Run Anywhere, middleware providers are relentlessly clever in embracing and extending core platforms to keep those switching costs healthy. ...End Flamage]

Here's the critical issue: when you deliver your software as a network service, these switching costs are totally hidden from the user of the service. I mean, do you really know which OS your phone's mobile base station is running? (Okay, \*some\* of us do know...) So, the whole concept of stable OS/Processor stacks that have defined the computing industry for the past three decades is about to be seriously whacked.

Take Google. The best understanding is that it is a homegrown BSD derivative on not-leading-edge velcro-mounted x86 motherboards. The only binary standard that the developers at Google care about is the one that they cook up for their own use.

I suspect that just about every software startup shaking the trees on Sand Hill Road (for the VCs who aren't still hiding under their desks) fancies itself the next Google. Does this mean that every new startup will bake its own computing farm?

Left to their own devices, perhaps. But something much more likely will happen, and this is the "Aha". New startups would be better served focusing on their idea and relying upon the emergence of standardized farms --- computing utilities --- to provide an on-demand substrate. This is the seed idea behind the Sun Grid $1/cpu-hr and $1/gig-mo offerings. These, however, are raw cycles with simple O/S-process and network filesystem abstractions. On top of these will sit new deployment containers, and on top of those, new network applications.
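As a back-of-the-envelope example at those list prices --- $1 per cpu-hour and $1 per gigabyte-month, the only figures quoted here --- here is a tiny cost sketch. The workload numbers are hypothetical, purely to show how the utility billing math works out.

```java
/**
 * A quick cost sketch at the Sun Grid list prices mentioned in the post
 * ($1 per cpu-hour, $1 per gigabyte-month). The workload figures are
 * hypothetical, just to illustrate the arithmetic.
 */
public class UtilityCost {
    public static void main(String[] args) {
        double cpuHourRate = 1.0;   // $ per cpu-hour
        double gbMonthRate = 1.0;   // $ per gigabyte-month

        int cpus = 200;             // assumed farm size for a startup's batch job
        double hours = 72;          // assumed run length
        double storageGb = 2000;    // assumed working set kept for one month

        double computeCost = cpus * hours * cpuHourRate;
        double storageCost = storageGb * gbMonthRate;

        System.out.printf("compute: $%,.0f   storage: $%,.0f   total: $%,.0f%n",
                          computeCost, storageCost, computeCost + storageCost);
    }
}
```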

Aha! A \*new\* stack (and an ecosystem around it) will emerge. I think it will be layered roughly like this:


The base layer is what I'll call the "Utility", comprising raw cpu-hours and gigabyte-months. These are the commodities of computational energy and remembering stuff.

The next layer is deployment "Containers". Today, we think of most grids as OS-specific process containers, but I'm betting this will rapidly evolve into more robust abstractions such as J2EE and Jini. Similarly, storage containers will comprise things like relational databases and fixed content stores.

The next layer is likely to have its own internal structure, but I'll lump it all together as "Application Services". It's the thousands of services that will be created around things like search, email, CRM, gaming, ERP, and on and on.

Finally, I'll bet that this will give rise to whole new "Application Networks" that aggregate and stitch together the network applications to create the whole fabric of completely re-factored enterprise IT, or perhaps new styles of businesses entirely. Somebody is going to make a lot of money on this layer.

It's a much longer discussion about how this new stack will change the balance of power in the computing industry. Suffice it to say that what's important will change, and a whole bunch of stuff will follow.

Which brings me back to my good friends in academia. I'm looking to you to help navigate this world, to really understand what will be important, and to understand where innovation can and should take place. To start, we are going to make available a million cpu-hrs and a hundred terabyte-months.

Gratis.

And honestly, just a few years ago, I thought computing was getting boring. And now I can't wait to witness the industry morph over this decade.

Wednesday Feb 09, 2005

The Community

Let me just say that I am very much enjoying the discussion and debate around what we are trying to accomplish with OpenSolaris and CDDL. But one thing that I hope we can avoid is rhetoric that wraps itself in the flag of "The Community". I think we should watch out for such sweeping generalizations for many reasons.

First, and foremost, while we all are members of the greater community of developers, I think it's fair to say that we are all also independent and free-thinking individuals who choose open source development --- and development licenses --- for a whole bunch of independent and free-thinking reasons. To "speak for The Community" seems to me not only more than a little hubristic, but fundamentally disenfranchising.

Invoking "The Community" (and speaking for it) has a Cathedral-like resonance. Let's remember that we are all at the Bazaar --- shout like crazy, but don't try to speak for us all.

A footnote on CDDL and GPL code co-mingling. The issue of license compatibility with co-mingled GPL and CDDL code is an issue with the GPL license, not with the CDDL. Indeed, CDDL will let you intermix modules with anything you want (including proprietary ones). GPL won't let you intermix with anything else. This is not an argument with GPL; I completely understand the intent, and it certainly is a powerful effect. Perhaps there is some room for innovation here as the FSF develops GPL3. I'd love to engage in that discussion.

Monday Feb 07, 2005

My Views on Open Source

There aren't many things in our industry that are more transformative than open source software, so I think it's a great topic to launch my first "official" blog. And what better time than now, since we just announced the release of Solaris in open source form under the new OpenSolaris community, www.opensolaris.org. (Thanks go to my good friend and colleague, Jonathan Schwartz, for his relentless teasing about my reticence to blog --- more on this later.)

Open software is fundamentally about developer freedom. We want developers to freely use any of the OpenSolaris code that we developed for their own purposes, without any fear of infringing Sun's IP: either patent or copyright.

We chose a license -- CDDL, an improvement of MPL -- that clearly and explicitly gives that freedom.

In fact, the license is MORE liberal in its IP grant than even the GPL, because it gives a clear patent license and doesn't demand the same viral propagation. Yes, I know that's a view divergent from many who believe GPL \*is\* open source, but I happen to believe choice and freedom go hand in hand.

Complementary to developer freedom are developer rights. Code developers (individuals or corporations) do have rights to the code they developed. It is, after all, the fruit of their labor. By choosing to place that code under an open source license, a developer surrenders some of those rights to the community in the hopes of a beneficial exchange. NO open source license surrenders ALL rights. The way you do that is to place code in the public domain.

Just to level-set. I'm a huge supporter of the free software movement, including the FSF. I knew Richard Stallman at MIT in the early '80s when he was ramping up the GNU project, and vividly recall the relentless yet methodical way he was going after the entire stack: start with the tools (gemacs, gcc,...), build out the supporting libraries and utilities, and then go for the kernel. As I recall, Richard went for the kernel slowly because he was pretty concerned about AT&T and I think wanted to be squeaky clean IP-wise, which as a result reflected a lot of care taken in the GNU project so far. Well, Torvalds and friends just went for it, and built out the kernel. Linux was most definitely built on the back of Stallman's hard labor. It's GNU Linux, IMHO.

At the center of Stallman's genius is the GPL. When compared to BSD, which was essentially a simple copyright grant, GPL was more explicit about IP grants (with a clever sort of mutually assured destruction if you were to sue), and, of course, the duty to re-publish any changes or improvements that got made to the code.

It's all about community building. But it's not a "free beer" license. If you don't play by the rules, then you are neither afforded the IP protection nor entitled to copy the code.

A pretty tough provision is the viral aspect that takes away the developer's freedom for independently developed code incorporating GPL code, by requiring that code to be GPL'd. (Future rant: a lot of GPL users kick these chalk lines pretty hard by having loadable modules that aren't GPL...)

And just to be clear. You CAN'T take code under GPL that you haven't written and place it under another license. And you DON'T get a patent grant for any of the ideas expressed in that code that you choose to re-code under a different license. Even so, by putting something under GPL, you still have a lot to say about what can and can't be done with it.

So why other licenses like MPL, Apache, and now CDDL? Only because GPL is often considered TOO restrictive in its terms. A key difference with MPL is that it removes the viral requirement, and permits code of different licenses to be co-mingled (obviously assuming that other licenses permit the co-mingling as well).

This co-mingling turns out to be really key for situations when you don't control all of the modules in your system, or when you want to work off of an open base but do some of your own extensions. And, yes, that does go against some of the everything-should-be-free philosophy implied by the GPL, but it helps strike a balance between a whole set of competing interests.

To be perfectly clear, we are releasing a huge amount of Solaris code under CDDL. That means you can take any or all of the modules, and if you respect the basic license terms of (1) propagating it, and (2) making public any improvements or bug fixes, you can do with it as you please. Embed it in any product. Build your own custom distributions. Intermix it with any other code you wish (assuming that code lets you do it). You can do any of that, and you get a grant to any patents we might have covering our code. That's an explicit part of the license.

What have we done? We have given away enormous intellectual property rights (the code and about 1600 patents that might read upon it) to any developer who wishes to use our code. The only thing we ask in exchange --- which is the only thing that Stallman and Torvalds and every other open source developer have asked in exchange --- is that you honor the license. Period.

Have at it!
