Deploying software in modern data centers is a nightmare. There's no happy way to say it. I've known this for a while, but I had a meeting today that really drove the point home. Mike Wookey, Prasad Pai, and I met the account team that manages one of Sun's largest financial services customers. The account team described the process and tools the customer uses today to deploy its complex, multi-tier software. It's typical of what we see with large customers, but it's not pretty.
Enterprise software is complicated. Deploying it involves multiple levels of dependencies that must be managed: hardware, operating system, middleware, applications, network configuration, and so on. All applications have these kinds of constraints, but there is no standard mechanism for describing the relationships, and tools vary widely in how they address them. The result is complex procedures that are either error-prone or time-consuming and inflexible. Neither is acceptable in an environment measured on agility, uptime, and cost savings.
In talking to customers over the past two years, we've really started to understand what's at the root of these issues, and we've been able to design the xVM product portfolio with them in mind. There is no single technological silver bullet that will solve these problems. However, with clever application of technology in some new areas, we believe we can make a big dent in the problem.
People who read my blog know that I like to talk about xVM VirtualBox. It is a cool product, and I use it almost every day. However, I'm not in the key demographic for the product. The most important people who use products are Software Developers (cue Steve Ballmer here). Almost all developers in modern enterprises are developing complex, multi-tier software. With VirtualBox, developers can create multiple virtual machines in software and construct a multi-tier dev/test environment without ever leaving the comfort and safety of their laptop. This changes a key part of the process.
Today, when a developer creates an enterprise application, it may be represented as a set of Java EAR or WAR files. However, in order to operate, these archive files require a complex recipe of hardware, software, and network dependencies that must be re-created by the test team, and then re-created again by the deployment team. The definition of this environment should (ideally) include middleware versions, operating system versions (with patch levels), network connectivity requirements, and many other variables. If any of these are not duplicated exactly, the software may not function correctly.
Now, consider the scenario where the developer doesn't simply hand off a set of application files and a recipe. Instead, the developer hands off a completely configured machine -- an exact copy of the one used to code and unit-test the software! This machine can be encapsulated inside a virtual hard disk file with a small amount of machine-readable metadata that describes the parameters of the virtual system. This "machine" can then be dropped unchanged into the hardened staging and deployment environments. No longer would the deployment team be responsible for scripting (even with the assistance of various tools) the construction (from bare metal!) of the application's environment. This could really change the game.
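To make the handoff concrete: assuming a VirtualBox version with appliance export support (the VM name and file name here are hypothetical), a VM can be packaged as a disk image plus an XML descriptor of the machine's parameters, and re-created unchanged on the other side. The round trip is a two-command configuration sketch:

```shell
# Developer side: package the fully configured VM -- virtual disk plus
# machine-readable metadata -- into a single appliance.
VBoxManage export appserver -o appserver.ovf

# Deployment side: re-create the identical machine from the appliance,
# with no bare-metal scripting of the application's environment.
VBoxManage import appserver.ovf
```

The descriptor format here follows the emerging Open Virtualization Format (OVF) approach; the point is that the machine definition travels with the machine, rather than living in a deployment runbook.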