Monday Nov 09, 2009

"... Get the Client to Not Use Corporate IT ..."

In one of the talks at tonight's San Francisco Drupal User's Group meeting, David from Razortooth Communications gave some interesting advice. The audience for the 20-minute site showcase probably did not find anything Earth-shattering in the presentation, but considering the focus on Razortooth's client, PARC, a long-reigning Silicon Valley tech establishment, I found the advice illuminating...
"TRY TO GET THE CLIENT TO NOT USE CORPORATE IT HOSTING"
PARC's IT department, like those at many mature technology-driven enterprises, does not readily abdicate IT control to a band of developers with a new idea. Just last week I was reminded of this when I presented on the topic of cloud computing to a group of IT managers at another large tech company. When I asked if they saw the relationship between developers and IT changing, one manager said simply, "They have to get their IT resources through us." His colleague then quipped, "and they wear knee pads".

With attitudes like this in IT, it's not surprising that application developers are increasingly circumspect about the traditional model of centralized IT services. Precious time lost to the traditional IT cycle of procurement, receiving, and provisioning, along with the constraints of working within a corporate-standard type and size of infrastructure, drives developers to avoid corporate IT. The innovators among them will find other ways to get the job done. What was once jerry-rigged on (mis)appropriated corporate assets is giving way to another, more efficient form of ad hoc infrastructure: cloud computing services.

It's clear to anyone developing in open source communities like Drupal that traditional IT is becoming less relevant by the day. What is less clear is whether corporate IT departments can survive the shift toward self-service, pay-as-you-go infrastructure by emulating those traits within their own data centers, or whether they'll further alienate developers with draconian measures or by declaring virtualization a satisfactory compromise - one that ultimately does less to enable innovation and more to perpetuate the status quo.

Monday Aug 17, 2009

OpenSolaris Server !Desktop - How to Minimize OpenSolaris

OpenSolaris is widely known as a state-of-the-art, feature-rich server operating system. So it's a little surprising to many users trying it for the first time when they find that it is also a full-fledged desktop environment.

But what if you don't want all that Gnome-Firefox-VNC-Xorg goodness in your base installation?

Well, you could pick through all the packages on the system looking for the desktop tools, examine them for dependencies, and build a manifest for removal. Or, if you're on OpenSolaris 2009.06, you could just use Glenn's minimization script, which comes in the samples with the Immutable Service Containers (ISC) Construction Kit for OpenSolaris. To get it, grab a copy of the ISC project:

	$ hg clone https://kenai.com/hg/isc~source isc
Then run the script:
	$ pfexec isc/opt/samples/minimization.ksh
The script will remove 237 packages and disable 11 services that are non-essential to running OpenSolaris strictly as a server. The project page on Kenai gives a little more background on the minimization script. Note: this minimization is separate from the hardening that the core ISC installation will perform on the systems. While you're at it, you might also try ISC itself, or install the pre-configured OpenSolaris ISC image, which is available in OVF format. Instructions are on the Immutable Service Containers Construction Kit page.
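For a sense of what the script is doing under the hood, it boils down to ordinary IPS and SMF operations. Here's a minimal, hand-rolled sketch of the same idea; the package and service names below are illustrative examples, not the script's actual manifest:

	$ pkg list | grep -i gnome                  # look for candidate desktop packages
	$ pfexec pkg uninstall SUNWgnome-terminal   # remove one (repeat for each package on your list)
	$ svcs -a | grep graphical-login            # find desktop-related services
	$ pfexec svcadm disable gdm                 # disable the graphical login

The script simply does this at scale, with a vetted list of packages and services and the dependency checking already worked out.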

Monday May 25, 2009

Voila! DrupalCon Paris Set for Sep 1-5

Registration for DrupalCon Paris opened this weekend, and already several sponsors have claimed coveted top sponsorship spots for the September 1-5, 2009 event. Whether or not Sun will sponsor for the 5th time in a row is something I'll be checking into over the coming weeks.

Although the agenda and session schedule for DrupalCon Paris has not yet been posted, there is already some interesting content planned for the event, including full-day, immersion-style training in the Commercial Training tracks. Whether or not Sun sponsors this DrupalCon, I plan to make this one my 5th in a row.

In other DrupalCon news, San Francisco is making a bid for North America's DrupalCon in March 2010.

Tuesday Mar 24, 2009

The Sustainability Challenge: Can the Internet Help?

Today's Green:Net event in San Francisco's Presidio District brought together some of the most informed and influential people working at the intersection of sustainable business and high tech. It was good to see this cross section so enthusiastically pursuing opportunities for improving the environment and their bottom lines while most of the rest of the economy is struggling to stay afloat. I was proud to represent Sun, which was one of the three event sponsors.

No Consensus Here

Kudos to the Green:Net organizers for hosting a wide spectrum of perspectives in the keynotes and panels of this one-day conference. The poles of the spectrum were aptly represented by Saul Griffith and Bob Metcalfe. Both of these engaging speakers are big heroes of mine.

Both have expansive knowledge relevant to dealing with society's energy dependence, but they differ on the approach to meeting the challenge. I think Griffith would agree with Metcalfe when he said, "Our goal is not to darken the Earth," but he would differ with the Internet tycoon on priorities. Metcalfe, true to his venture capitalist roots, believes the real challenge is to feed the ever-increasing demand for energy, and he cites the evolution of the Internet as a great reference case for meeting that challenge. Griffith's perspective, on the other hand, places the priority squarely on conservation. I have to side with Griffith, with all due deference to the man behind the Law of network effects, if for no other reason than that conservation represents a bigger shift for society and implicitly acknowledges that there is much that is broken in our current consumption orientation.

In an era when some mainstream scientists are projecting the decimation of human populations, knocking our species down from six billion to one billion by the end of the century due to the effects of climate change, it's time to consider an emergency exit strategy. Griffith gave his emergency exit strategy in a condensed version of the one-hour talk he gave to the Long Now in January, and still managed, in 20 minutes, to convince many of the 200+ attendees that the situation is dire and, crucially, that we (that's the royal We) have a shot (about a 1-in-3 shot) at keeping CO2 below the 450ppm threshold where scientists agree we might still avoid irreversible feedback effects. Griffith and Metcalfe effectively sanctioned the rules for the Green:Net conversation: consensus is not required to participate; differing opinions are valued.

Open_Source is a Verb

One of the highlights of the event was the Sun Workshop titled Open Sourcing The Sustainability Challenge: Technology for Social Good. The panel discussion was moderated by Josie Garthwaite, with Sun's own Lori Duval, Natural Logic's Gil Friend, EQ2's Steve Burt, and AMEE's Gavin Starks providing the subject matter expertise. Data was the focal point of this discussion, with lots of emphasis on accuracy and integrity as critical success factors for measuring GHG inventories and environmental effects. It was clear that these leaders appreciate the magnitude of the challenge, yet they do not shrink from it. It was also clear from this workshop that the companies, NGOs, and governments building these data collection and analysis systems are going to need a lot of computing infrastructure and specialized information systems knowledge to manage the Big Data sets required for accurate and meaningful analysis. Openeco.org was cited in the discussion as a good example of an open model for collecting, protecting, and comparing data. However, Openeco.org, and the range of carbon calculators and GHG inventory tools on the web, do not fill the need for a comprehensive analytics system for decision makers who deal with GHG emissions policy and standards. Bigger, more sophisticated, and more integrated systems are needed here.

Can the Internet Help?

Another highlight for me was the panel titled The Green Web Effect, which dealt with the use of web technologies to create a successful call to action in the green business movement. The highly evolved thinking about the network effects of the Internet typified by Clay Shirky's work was on full display in this panel, consisting of moderator Alexis Madrigal, Erin Carlson, Director of Yahoo! for Good, Ron Dembo, Founder and CEO of Zerofootprint, Jason Karas, Founder and President of Carbonrally.com, Kevin Marks, Developer Advocate for OpenSocial at Google, and Dara O'Rourke, CEO of GoodGuide. Alexis opened the discussion by saying, "We didn't necessarily start the fire, but we're trying to help put it out." Framed in that context, he then asked the panelists, "What do you want to have happen out in the world?"

I liked O'Rourke's answer the best, because it aligns with a vision I've had for many years. He said, "We want to help people to make better decisions by providing the right information at the right moment." He was speaking of the GoodGuide service, which helps consumers understand the environmental, social, and health impacts of their purchases at the decision point in a retail environment. I've just started using their new iPhone app and am pretty impressed with what they've accomplished in less than a year as a business. I am very excited about the potential for this service when they add barcode scanning capability. I'd also really like to see them add some reputation-enabled crowdsourcing capabilities - and I'd really like to help them build that.

At the end of the day, I think it was pretty clear to everyone at the event that open, networked systems for dealing with global warming and energy efficiency are not only possible, but imperative if we are going to have any chance of avoiding the catastrophic consequences of climate change. The event was another reminder of why I'm proud to work at Sun.

Tuesday Mar 17, 2009

Mega Data Center: Seeing is Believing

Over the past few months I had heard exclamations of amazement regarding a storied new data center in the Nevada desert called SuperNAP. I was a bit skeptical of the superlatives about scale and efficiency that embellished these stories. My skepticism turned to exuberance last week when I joined a group of architects from Sun for a tour. The goal of our tour of this Mega Data Center was to see first-hand the state of the art as implemented by Switch Communications, where Sun operates its cloud computing business.

Switch, and its customers, which include several operating units within Sun, are beneficiaries of the collapse of Enron. The former utility giant had designs on trading network bandwidth using models similar to its energy trading systems. When Enron's flimsy financial structure gave way, its financial backers and the U.S. government stepped in to auction these assets. Switch CEO Rob Roy was the only one who showed up at the auction block. In an uncanny twist of fate, he managed to sidestep what could have been a formidable bidding war to control this hub of communication, which is unparalleled in North America.

Here are the vital stats that only begin to describe the phenomenal facility that Switch has managed to assemble:
  • 407K square feet of data center floor space
  • 100 MW of power provisioned from two separate power grids
  • Fully redundant power to every rack, backed by N+2 power distribution across the facility
  • Enough cooling and power density to run at 1,500 watts per square foot (that's 10x the industry average 150 watts).
  • 27 national network carriers
This describes the capacity of the SuperNAP, which is just one of the eight facilities operated by Switch within a 6-7 mile radius in a no-fly zone south of Las Vegas.

Sun to Reveal Cloud Plans Tomorrow

Some details of Sun's Cloud Computing business will be revealed tomorrow (March 18) in New York at the CommunityOne East event.


 Additional Resources

Tuesday Feb 24, 2009

How does Sun work with Drupal?

Organizers of DrupalCon DC, in the lead-up to the event, asked Sun and the other event sponsors a few questions about their relationship with Drupal.  The first question was:

How does Sun work with Drupal?

I provided this answer:

There are too many ways in which Sun works with Drupal to list them all here, but some of the highlights are:

Clearly, Sun is deeply connected to the Drupal community in many ways.

All the Q&A from DrupalCon's sponsors will be posted here on the DrupalConDC site.


* Entry corrected to say Sun has contributed more FLOSS code than any other single institution, not that Sun has contributed more code to the Linux kernel than any other single institution (although I think I did read that somewhere, it's not substantiated in this paper).  Thanks to Matt for pointing that out - it's a major difference.


Tuesday Feb 17, 2009

Rural Rwanda's Budding Healthcare System

Ordinarily, my Inbox is full of anything but heartwarming emails, but last week I was gratified to receive a photo of four Sun Rays atop health worker desks in Butaro, Rwanda. The photo, sent by Erik Josephson of the Clinton Foundation's HIV/AIDS & Malaria initiative, shows the pilot setup for rural health clinics that former president Bill Clinton envisioned when he made his TED 2007 wish to ...
"... build a sustainable, high quality rural health system for the whole country."
- Bill Clinton, March 2007

Sun provided these Sun Ray 2's and the supporting servers as part of its commitment to support the TED Prize that year. To actually see the Sun gear in situ makes all the planning and logistics and weekly conference calls over the past 18 months suddenly all worthwhile. The site in the photo is one of two pilot locations in which the infrastructure will be tested in live clinical situations over the next month. Pending the results of the pilot, this model infrastructure will be rolled out to an additional 70+ clinics and hospitals across rural Rwanda.

The pilot phase of the project, currently being administered by Partners in Health and The Clinton Foundation, is set to begin in the villages of Butaro and Kinoni on Monday. The systems infrastructure, comprising the Sun Ray 2, Sun Fire X2100 server, and Solaris OS, was selected by the project steering committee to serve up the healthcare worker desktop environment. The selection criteria reflected the goals of the project as well as the relatively austere conditions where the healthcare facilities are located:

  • Electricity is scarce and not terribly stable in rural Rwanda, so the Sun Ray 2, which consumes about 4 watts and is an entirely stateless device, is a good fit for the workstation. Attached to each Sun Ray is a low-power 15" display, which brings the total power consumed by each workstation to less than 25 watts. On the server end, the X2100 is the lowest-power server available from Sun. The total electricity demand for the primary IT infrastructure in a typical clinic - 7 workstations (Sun Ray and display), 1 server, 1 network switch - is less than 500 watts.
  • These facilities do not have extensive protection from the heat and dust that are common in Rwanda's rural villages, so reliable systems that will hold up to extremes are important. The Sun Fire X2100 is a reliable workhorse with good serviceability. Combine that with the Sun Ray's zero moving parts (except keyboard and mouse) and you have about the most reliable setup possible. Every clinic and hospital will inventory one spare Sun Ray, so if one does fail it's a simple swap to put that workstation back into service - no installation or configuration required. Just attach it to the network and you're back to treating patients. Spare X2100 servers will be inventoried in Kigali, so a server failure will require that a replacement be dispatched to the facility.
  • Rwanda is a fledgling economy. The ICT infrastructure upon which the healthcare system and other critical social services are built must be sustainable and low cost. Any dependence on proprietary commercial products would effectively impose a tax on growth and leave Rwanda's infrastructure at the mercy of foreign commercial enterprises. So, wherever possible, free and open source products were chosen. The Solaris operating system, the Gnome desktop environment, and the Open Office productivity tools fit the bill, and nicely complement the medical records software to be used in these clinics, OpenMRS.

OpenMRS and Africa's Health Workforce

OpenMRS is an open source application, written in Java, that was conceived by Paul Biondich at the Regenstrief Institute. It is designed expressly to address the need for electronic medical record keeping in the developing world, but also to serve as a framework for building generalized medical informatics systems. The Rwandan government selected OpenMRS as a key component of their healthcare scale up effort.

A few countries in sub-Saharan Africa, Rwanda among them, have set a long-range vision for national economic and social advancement. Vision 2020 is Rwanda's development strategy to achieve or even surpass developed world standards for national government, rule of law, education and human resources, infrastructure, entrepreneurship, and agriculture. OpenMRS and the supporting open source and energy-efficient technologies from Sun contribute toward the infrastructure as well as the human resource goals of the plan. These tools will help to expand the pool of workers capable of delivering critical forms of healthcare by providing a standard protocol and reference resources to community members and paraprofessionals employed in health worker roles, thereby alleviating dependence on highly trained medical practitioners for routine diagnosis and procedures. Estimates from the development community and the UN indicate it would take more than 20 years for sub-Saharan countries to reach the ratio of 2.5 health workers per 1,000 people called for by UN targets, assuming they even had sufficient training capacity to graduate that many doctors and nurses. Instead, OpenMRS helps to change the equation and makes it possible to expand health care services much faster than would be possible in a traditional public health model.

In essence, the four workstations in the photo represent a new model health system, not only for Rwanda, but potentially for many other countries in the developing world.


Related Reading:

Sunday Feb 08, 2009

Help Yourself to Some OpenSolaris on EC2

In the maelstrom of preoccupations that kept me awake last night, self-service in the cloud was a strangely prominent theme. A sad commentary on my slumber time, I know, but it was eerily coincident with news of OpenSolaris freed from a special registration process - when I woke this morning I found this announcement in my Inbox:

News Flash for Our OpenSolaris 2008.11 on Amazon EC2 Users!

We are happy to inform you that the latest OpenSolaris 2008.11 Base AMIs on Amazon EC2 in the US and Europe are now available to you and your users with no registration required! Please stay tuned for more OpenSolaris 2008.11 AMI stacks coming soon for you to quickly access. The registration process for pre-OpenSolaris 2008.11 AMIs is still in effect.

For your reference, here are the AMI IDs:
OpenSolaris 2008.11 (US) 32-bit AMI: ami-7db75014
OpenSolaris 2008.11 (Europe) 32-bit AMI: ami-6c1c3418

To read about what's new in OpenSolaris 2008.11, please visit the OpenSolaris Web site.
OpenSolaris on EC2 had been available for months, but it was cloistered behind a registration process that involved waiting for a human to get back to you with approval of your request. But no more. Now OpenSolaris on EC2 is a first-class citizen alongside all the other *nix and Windows distros, available self-service to anyone with an AWS account.
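If you want to take it for a spin, launching one of these AMIs is a one-liner with the EC2 API tools (or whichever AWS interface you prefer). A minimal sketch, assuming the API tools are already configured with your AWS credentials and that you have a keypair named mykey; the keypair name, key path, and login user below are illustrative, so adjust them to your own setup:

	$ ec2-run-instances ami-7db75014 -k mykey -t m1.small   # launch the US 32-bit AMI
	$ ec2-describe-instances                                 # note the public DNS name once it's running
	$ ssh -i ~/.ssh/mykey.pem root@<public-dns-name>         # log in to your new OpenSolaris instance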

Sunday Jan 11, 2009

Expect More Innovation in Cloud Tools

Amazon.com released on Thursday a web GUI for AWS: the AWS Management Console.

It's pretty slick, although still clearly a beta - some rough edges around navigation, and EC2 support only (no S3, SQS, CloudFront, etc. yet). If you're running lots of diverse AMIs, this single view is a great decision-making tool. Once they add Tagging (label and group Amazon EC2 resources with your own custom metadata), companies will be able to quickly see opportunities for optimization, grouping of operations, etc.

The AWS announcement probably hurts RightScale, but this is their positive spin on it.

This news from Amazon raises the bar on ease of use and the relative importance of self-service in the cloud market. Once they've experienced the AWS Management Console or RightScale's dashboard, many enterprises will want their own private clouds to be built with clean UIs and Web 2.0 ease of use too. While a quality programmatic interface is vital to the scaling needs of cloud users, a simple and useful set of GUI controls is equally important for those primarily seeking the self-service benefits of cloud computing.

Entering the market without a comparable console will be a disadvantage for upstart public clouds, but this new prerequisite for clouds also creates an opportunity to up the ante further.

A couple opportunities for value add come to mind:

  1. Social networking integration - one clear opportunity is to enhance cloud console functionality with existing social networks. Imagine a 37signals interface that lets you plot the sequence of operations required to upgrade your complex app running across 1000 instances in Basecamp, a message to Twitter followers when specific operations complete, and a LiveJournal post summarizing the status of the upgrade after completion - a social, RESTful SOA for datacenter operations, if you will.
  2. Modeling and design tools - I expect companies like SmugMug won't use the AWS Dashboard and Control Panel features as-is, but would use a GUI that could help model different deployment patterns, quickly sort through sequencing and dependency issues, and compare performance characteristics of alternative architectures. (If you haven't read how SmugMug uses EC2 for their Skynet, check out Don MacAskill's post on it. A modeling tool might give Don a way to compare an SQS implementation with his home-grown solution, and make a decision informed with real financial and performance inputs.)

Other Cloud Computing news for the week ending 10-Jan-09:

Monday Dec 15, 2008

An Evolving Maturity Model for Clouds

In a post on his Wisdom of Clouds blog last week, James Urquhart proposed five phases of Cloud Maturity.

I fully concur with the phases of this maturity model, and with Urquhart's assessment of the current state of enterprise IT on that scale: most have consolidated, some have abstracted, fewer have automated, and only a handful are experimenting with metering and self-service. None have achieved open, Internet-based Market capability yet.

This maturity model is useful, but I can't say I find the first four phase names, order, or definitions to be novel in any way - I've been writing these exact same phases on client whiteboards for about three years.

Maturity Model (MM) Hopping Disallowed

The fifth phase (Market), however, is new and insightful, and it reshapes the preceding four in interesting ways. In other words, if you're on a path to reach Market maturity, then certain capabilities must be addressed in earlier phases that weren't necessarily required in older models that stopped at Utility. For example, elements of service level management must be addressed in the Automation and Utility phases that were not essential prior to the proposed model. In a pre-Market maturity model, enterprise IT could deliver automatic provisioning and pay-for-use to their customers without demonstrating compliance with specific service levels. That won't fly in a price-arbitraged cloud Market, so the capabilities important to the Market phase must be built in during the earlier phases to which they correspond. Maturity models are only useful if each phase inherits all the preceding phases' capabilities. If additional Automation capabilities are required to achieve Market capability, then Automation was not really achieved at phase three.

What of Elasticity?

I'm not convinced that this is a comprehensive maturity model, or that we can fit clouds, both public and private, onto a single vector like this. For instance, where does Elasticity fit? Auto-scaling relies on Automation, but would we require it of any environment claiming to be Automated? The pay-for-use implication of Utility does not necessarily mean resources are acquired and released in conjunction with use - metering is not intrinsic to provisioning, and vice versa. Elasticity, I submit, implies growing and shrinking resources synchronously with customer demand. So, does Elasticity warrant its own phase in the Cloud Maturity Model? What are the implications of this model for private clouds? Does a private cloud ever reach Market-phase maturity?

MM Drafted, Now Where's the Magic Quadrant?

In any case, the proposal of a Cloud Maturity Model is a valuable step in the evolution of cloud computing, and the Market apex of the model seems like a reasonable goal. And there is an army of consultants forming to help enterprises address the climb.

Saturday Dec 06, 2008

The Promise of Crowdsourcing Gets Brighter

Today I came across the work of Mike Krieger and Yan Yan Wang at Stanford's HCI Lab, in which they studied the efficacy of certain online brainstorming techniques.  Their research compared idea generation tools to see if, through tool adaptations, it was possible to increase participation in expanding and improving ideas while overcoming the problems typical of brainstorming on discussion forums: too many disparate ideas and not enough collaboration on them.  This very narrow (but valuable) study of brainstorming shows potential for significant improvement in modes of collaboration, so I hope they continue their probe into this area.

Krieger's research into idea generation revealed some great lessons from crowdsourcing endeavors on the Internet which he has shared in these slides posted on Slideshare.

The most valuable and, I think, uniquely insightful advice he gives is his nine guidelines for when to apply crowdsourcing:

  1. When diversity matters
  2. Small chunks/ delegate-able actions
  3. Easy verification
  4. Fun activity, or hidden ambition
  5. Better than computers at performing a task
  6. Learn from hacks, mods, re-use from crowd
  7. Enable novel knowledge discovery 
  8. Maintain vision & design consistency
  9. Not just about lower costs
I see these tips as useful not only to the aspiring wiki-ist and crowdsourcer, but also to users of Mechanical Turk, which happens to be a tool that was instrumental in Krieger and Wang's Ideas2Ideas study.  I'm looking forward to applying these guidelines and the design principles behind Crowds and Creativity to future crowdsourcing endeavors.

Wednesday Dec 03, 2008

And the Spoils Go to the First Vendor Supporting IaaS Standards

Ian Kallen over at Technorati wrote a nice post about the cloud computing ontology and the subtleties of Infrastructure as a Service (IaaS).  I'm glad to know he's still working on the hard problems there at the blogosphere search engine after their recent cost cutting measures.  As he has said to me previously, he writes, "What I foresee is that the first vendor to embrace and commoditize a standard interface for infrastructure management changes the game."  I think he's right, particularly in his prediction that these standards will enable a marketplace in which workloads can be moved from cloud to cloud according to price, capacity, and feature criteria.  A few companies are jockeying for the pole position in the race to provide the arbitrage for this meta cloud that Ian envisions.  RightScale is perhaps in the best spot for that right now.

But who's going to set the standards for interfacing with clouds?  It's still pretty early in the game, but there's no question that Amazon has a good leg up with the AWS APIs, which are further buttressed by Eucalyptus's emulation of those interfaces in their open source, Xen-based IaaS stack.  Meanwhile, Ruv over at Enomaly is fostering a Unified Cloud Interface (UCI) standard to be submitted to the IETF next year.  Conspicuously, it appears that Amazon is involved in neither the Eucalyptus nor the UCI standards efforts.  Meanwhile, RightScale is working closely with Rich Wolski's Eucalyptus team, and both of these standard bearers are advising on Sun's Network.com model.  It will be interesting to witness the evolution of agreed-upon standard interfaces in the presence of the de facto standard that is AWS.  Until there's a cleaner and/or cheaper way to develop on OpenSolaris in the cloud, I'll continue to write to the AWS interfaces to launch and extend instances of OpenSolaris on EC2.

Thursday Oct 30, 2008

It Don't Take a Weatherman to Know Which Way the Wind Blows - Except Inside the Enterprise

With California's first real rain of the season forecast for Friday, it's time to take stock of another weather system affecting the West (and other places connected to the Internet): cloud computing. Summer 2008 saw a downpour of cloud offerings. We've witnessed whole business ventures billow up and evaporate on the cost and agility promises of cloud computing. While storm systems continue to build off the Pacific coast, the long-range forecast is for an unstable system to dump on the landscape for a couple of quarters before a high pressure cell clears the air. Despite the instability (and, as if this hackneyed weather metaphor needed more abuse), it don't take a weatherman to know which way the wind blows - adoption of cloud computing will continue to rise.

The storm of demand is fed by startups

In the climate of "fail fast" startups, the appeal of cloud computing as a means of containing cost and improving productivity during the fragile stages of germination is obvious: skip over the infrastructure "muck" and keep your costs tied to your growth.  "Fixed costs are taboo" is the principal directive from many VCs investing in Web startups - put the employees on a sustenance + equity compensation plan and, for God's sake, don't spend anything on compute infrastructure you don't absolutely need.

A major front accumulates in the enterprise

But what about the enterprise?  Enterprises differ from startups in how they evaluate risk and how they spend on IT services.  In the enterprise computing landscape, risk-averse business leaders are concerned with reliability and control over their services and their data.  Control is not one of the attributes primarily associated with cloud computing, security risks are a major barrier to enterprise adoption, and 99.9% availability is often not good enough for business-critical and mission-critical services.  Further, and for the time being, fixed costs are already baked into the equation in most IT business models.  In fact, most large enterprises treat IT as one big fixed cost, which they parcel out to business units according to some "fair share" cost allocation scheme.

Rarely are the business units of a large enterprise satisfied with their cost allocation, let alone the IT services it pays for, but they're captives of myriad barriers like technical complexity, regulatory compliance, data provenance, spending constraints, and limited organizational imagination. One or more of these factors are impediments to any serious consideration of public cloud computing for existing enterprise IT needs.  Business consumers of enterprise IT would like to have a secure, reliable, pay-as-you-go public utility service customized to their unique needs, but such a service does not exist. They'd use a public cloud for the cost and agility benefits if they perceived the risks to be acceptable, if their complex needs could be managed, and if they weren't already paying for IT services with funny money. Public cloud service providers are working on the availability concerns by committing to SLAs, and on certain security concerns by providing VPNs, but the reality is that the major refactoring of their huge software investments required to work in the public cloud will drive many enterprises to build their own cloud-like private infrastructure instead. In fact, most large enterprises are probably already doing this - the practice of building cloud-like infrastructure has been evolving for years under the cover of consolidation and virtualization initiatives.

High clouds are approaching

If predictions of mass consolidation onto public clouds prove true, then enterprise IT might be a dying breed of industrial infrastructure.  But just as it took electric power distribution decades to transition from local DC power generation to utility grids, traditional data-center-bound enterprise IT won't die easily.  Enterprises will strive for the kind of efficiency that propels public cloud adoption by continuing to invest in consolidation and virtualization in their own data centers.  But consolidation and virtualization alone do not a cloud make, and will leave the consumers of enterprise IT with the same bucket of bits, still wanting for a cloud.  So when does one confer cloud status on a consolidated, virtualized environment?  The following simple criteria give a pretty decent working definition:
  1. When it delivers IT resources as a metered service (rather than as an asset or a share of an asset), and
  2. When all its services can be accessed programmatically (sketched below).
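To make that second criterion concrete, "accessed programmatically" means something like the following - a purely hypothetical REST API for an internal cloud, sketched with curl (the endpoint, resource names, and parameters are invented for illustration, not any particular product's interface):

	$ curl -u apiuser:secret -X POST \
	    -d "image=webapp-base&size=small&count=2" \
	    https://cloud.example.internal/api/instances                # provision two instances, metered from this moment
	$ curl -u apiuser:secret \
	    "https://cloud.example.internal/api/usage?period=2008-10"   # pull the metered usage for the billing period

If your consolidated, virtualized environment can't be driven end to end through an interface like that, it isn't a cloud yet by this definition.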

Yes, the implication here is that cloudhood can be achieved in a private implementation. (This potentially violates certain tenuous claims that cloud services must be provided offsite and by a 3rd party, and that clouds are accessed over the Internet, but we'll not constrain the discussion with those seemingly arbitrary distinctions.)

Of course, the devil is in the details, so the next posts in this series will address more nuanced definitions of cloud computing. In particular, we'll examine the attributes of cloud computing as put forth by other aficionados, and what value and relevance these attributes have to business consumers of enterprise IT.


Related reading

Monday Jul 21, 2008

The Danger of Doing the Whole Proprietary Systems Thing All Over Again

I think it's time to dredge up Greg Papadopoulos' insightful talk at Structure08 about the economics and fate of Cloud Computing. After all, it's a month old already, so it's ripe for historic interpretation. With all of a month's worth of evolution and perspective, do you find his admonition sagacious or mercenary? I'll ask again in a few months.

(Greg's talk is the eighth in the menu of talks posted here.)

While this question of proprietary clouds lurked behind most of the Structure08 content, and came to the surface whenever Joyent had something to say, it is Greg's talk that really throws down the gauntlet to our industry.

Are we going to fork over control again?  And what matters more for control of your data?  The ability to transfer it, or the ability to access it in the future?  Are there any companies, other than your own, to whom you would give the keys to your data?

Wednesday Apr 23, 2008

The Rise of Collaborative Culture

No sooner had we put the wrap on an April 9 Commonwealth Club panel interview on Collaborating for Change than PBS announced a really cool collaborative project on Nova to design the "Car of the Future".  Both of these recent productions focus on the application of open source design to social and economic needs beyond software.  The promise of open source economics is popping up everywhere.  It must be something in the water (or the atmosphere).  Network-based open source design efforts have been written about before, and there are more than a few established non-software open source design projects, but they were hardly regarded as mainstream.  And open source as a business model has been a fringe enterprise.  But all that is changing.

The upcoming Nova special and the Commonwealth Club interview (with Amy Novogratz, Kate Stohr, Maria Giudice, and myself; video courtesy of fora.tv) serve as proof points that this phenomenon has exceeded meme status and is spilling over into the broader socioeconomic graph.  But we knew this was inevitable, right?  We just needed the right conditions for humanity's collaborative tendency to come out of the proprietary deep freeze.

The substrate upon which this new culture is rising pairs flexible licensing models à la Creative Commons with accessible technology for building collaborative online communities à la Drupal, Wordpress, Yahoo! Groups, and PBWiki.  Among the catalysts for this reaction are frustration over obscene economic inequities around the world, abuses of people and planet for profit, and utter neglect by federal governments.  As was discussed here in the video interview about the Open Architecture Network, these frustrations can be overcome by collaborating for change on the net.

Need more proof of the trend toward an open source economy?  Just check with the folks at Open Everything.  They're tracking numerous open collaboratives that are exogenous to the software world but infused with many of the same principles, practices, and tools as open source software projects.

One of the most prominent tools applied to these new collaboratives is Drupal, and we discuss its role in the Open Architecture Network in the video (at :37:30, :46:30, and :51:00).

Ten years ago who would have imagined that:

Yet these, and plenty of other examples, show that collaborative culture is on the rise.  Does this signal a next-generation economy in which businesses profit less from market lockout and legal protection and more from direct value delivered in open markets?  Or does it lead to a more fundamental shift wherein socioeconomic prosperity derives less through commerce than through collaborations for which the primary incentive to contribute is sociocentric good?

About

At the confluence of cloud computing, sustainability, and open source
