Dynamic Infrastructure Background
By stanford on Aug 04, 2008
I refer to Dynamic Infrastructure in quite a few of my posts, so I created this post with some background info as well as pointers to other sites with deeper content. I'll probably link to this post quite a bit in other posts, so hopefully it is useful.
What is Dynamic Infrastructure?
It has been a long-running project to capture the infrastructure and service usage patterns of some of our largest and most advanced business (as opposed to scientific) customers, and to translate what we learned into a general framework for automating lifecycle operations.
It has been a combination of requirements gathering, architecture, and evangelization, as well as an exercise in prototyping and reference implementations. We are currently close to completing our third-generation reference implementation. The roadmap has looked roughly like this:
Some of the first work was very tool-specific, tackling basic provisioning functionality for OS, software, and network. As the initiative has evolved, we have focused more on building a framework that provides generalized interfaces for provisioning, configuration management, monitoring, reporting, persistence, and so on. The interfaces provide common terminology and dictate how adapters interact with each other by channeling their interactions through the framework.
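To make the "generalized interfaces" idea concrete, here is a minimal sketch of what one such interface and a trivial adapter behind it might look like. All of the names here (`ProvisioningAdapter`, `InMemoryAdapter`, the method signatures) are my own illustrations, not the actual Dynamic Infrastructure API:

```python
# Illustrative sketch only: the framework defines an abstract interface,
# and each tool plugs in behind it as an adapter.
from abc import ABC, abstractmethod

class ProvisioningAdapter(ABC):
    """Hypothetical common contract the framework uses for every provisioning tool."""

    @abstractmethod
    def provision(self, resource_id: str, profile: dict) -> str:
        """Create the resource and return its resulting state."""

    @abstractmethod
    def deprovision(self, resource_id: str) -> str:
        """Tear the resource down and return its resulting state."""

class InMemoryAdapter(ProvisioningAdapter):
    """Trivial stand-in adapter used to exercise the interface."""

    def __init__(self):
        self.state = {}

    def provision(self, resource_id, profile):
        # Record the resource as provisioned with its requested profile.
        self.state[resource_id] = dict(profile, status="provisioned")
        return "provisioned"

    def deprovision(self, resource_id):
        self.state.pop(resource_id, None)
        return "deprovisioned"
```

The point of the abstraction is that the framework only ever sees `ProvisioningAdapter`, so swapping one tool for another never touches the orchestration logic.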
The first cut of this framework is about to go open source. Hopefully the legalities and logistics will be completed by the end of August.
How Is It Different?
Well, first of all, it is not a product-based suite like Tivoli or BMC. In a sense it is an open core that these and other products or tools can integrate with to bring a "best of breed" feel to the datacenter toolset. So, if you want to use VMware for Windows provisioning, xVM Ops Center for Solaris provisioning, N1 Service Provisioning System for app server and web server provisioning, Intelliden for network device provisioning, Atrium or N(i)2 for an enterprise CMDB, OpenView for SNMP monitoring, ... you can. The only criterion is that the tools have (or can be wrapped with) an API and that they implement one or more of the relevant adapters for Dynamic Infrastructure.
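The "wrap it with an API" criterion can be sketched as a thin adapter that translates the framework's generic verbs into whatever calls a vendor product exposes. Both classes below are hypothetical stand-ins I invented for illustration; no real product's API is shown:

```python
# Illustrative sketch only: adapting a vendor product's native API
# to the framework's generic provisioning contract.

class FakeVendorAPI:
    """Stand-in for some product's native API (e.g. an image-deployment system)."""

    def __init__(self):
        self.deployed = set()

    def deploy_image(self, host, image):
        # Pretend to push an OS/software image to a host.
        self.deployed.add((host, image))
        return {"host": host, "image": image, "ok": True}

class VendorProvisioningAdapter:
    """Adapter mapping the framework's generic verbs onto the vendor API."""

    def __init__(self, api):
        self.api = api

    def provision(self, resource_id, profile):
        # Translate the generic call into the vendor-specific one.
        result = self.api.deploy_image(resource_id, profile["image"])
        return "provisioned" if result["ok"] else "failed"
```

Each product needs only one such shim; from the framework's side they are all interchangeable.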
What is the Long Term Vision?
Think about it this way: you have extreme virtualization and a strong driving force toward homogeneity in infrastructure. This means you'll have a small number of tools to manage systems, OSes, and virtualization, but a huge number of things to manage (think 1500 IP addresses per rack).
The purpose of all this virtualization is to create and personalize a business environment for your "customer". Above the OS and OS virtualization layer is where this personalization really becomes valuable, but the toolset is broader, and there are many more customizations that are changing more rapidly (think 4 different flavors of app server, 1000 different applications to go in them).
So, to provide high value to your customers in the new enterprise, you need to manage at least an order of magnitude more scarce resources (like IP addresses) due to virtualization, and you need to take on their application deployment tools and lifecycle management so you can live up to their expectations.
The long-term vision (at least mine) for Dynamic Infrastructure is a single interaction point (Dynamic Infrastructure itself), a few clearly defined roles (adapter developer, solution developer, operator, administrator), and a comprehensive language for describing workflows, events, and policies, so that IT can be automated to an extreme degree while leveraging best-of-breed tools for any particular environment.
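The post doesn't show what the workflow/event/policy language looks like, so here is only a hedged guess at the shape of the idea: a declarative description of ordered steps, plus a tiny runner that dispatches each step to whichever adapter handles it. The descriptor format, step names, and `run_workflow` helper are all invented for illustration:

```python
# Illustrative sketch only: a declarative workflow descriptor and a
# minimal runner that dispatches steps to registered handlers.

workflow = {
    "name": "deploy-app-server",
    "steps": [
        {"action": "provision_os", "target": "host-01"},
        {"action": "install_app_server", "target": "host-01"},
        {"action": "register_monitoring", "target": "host-01"},
    ],
}

def run_workflow(workflow, handlers):
    """Execute each step in order via the handler registered for its action."""
    log = []
    for step in workflow["steps"]:
        handlers[step["action"]](step["target"])
        log.append(step["action"])
    return log
```

The value of a descriptor like this is that operators edit data, not code: the same runner (and the same adapters) service every workflow.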
For more info, see the DI Wiki.