Deployment engineering has typically been defined as the build and release process as applied to software engineering. Next-generation, very large-scale environments (think eBay, Google, and others) are increasingly applying these methods to infrastructure management. For these customers, infrastructure has moved “up the stack” to include the middleware platforms and “across the stack” to include more and more of networks, storage, and the tools and processes that make management possible.
This emerging deployment engineering creates (or derives) the policies that make “good” architecture, controls and monitors how the architecture is reacting, and provides the framework (use cases) for change. It is the generation of well-thought-out patterns that are applied via a model (as opposed to real-time, component-based configuration done, say, at a keyboard or via a patch) to the infrastructure to create and change services. It is the “science” of deployment architecture, infrastructure design, and the integration process that makes it happen.
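To make the model-versus-keyboard distinction concrete, here is a minimal sketch of model-driven deployment. All names here (`desired_model`, `plan_changes`, the tier and middleware labels) are invented for illustration, not any real tool's API: a declarative model describes the intended state, and a planner derives only the changes needed to get there, rather than an admin editing components by hand.

```python
# Hypothetical sketch of model-driven deployment: a declarative model is
# compared against observed state, and only the differences are applied.

desired_model = {
    "web-tier": {"instances": 4, "middleware": "app-server-9.1"},
    "db-tier":  {"instances": 2, "middleware": "rdbms-11.2"},
}

current_state = {
    "web-tier": {"instances": 3, "middleware": "app-server-9.1"},
    "db-tier":  {"instances": 2, "middleware": "rdbms-11.0"},
}

def plan_changes(desired, current):
    """Derive the set of changes needed to move current state to the model."""
    changes = []
    for tier, spec in desired.items():
        observed = current.get(tier, {})
        for key, want in spec.items():
            if observed.get(key) != want:
                changes.append((tier, key, observed.get(key), want))
    return changes

for tier, key, have, want in plan_changes(desired_model, current_state):
    print(f"{tier}: change {key} from {have!r} to {want!r}")
```

The point of the sketch is that the pattern lives in the model: the same plan-then-apply step can be replayed safely across many services, which is what makes the approach a “science” rather than keyboard craft.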
Deploying an application to run optimally requires a systemic view. Constraints and limitations exist all around the deployment engineer (is this the new system admin role?). These can be physical or logical, known or unknown. To reach an optimized nirvana for application run state (some call it IT as a service, software as a service, etc.), all of these must be taken into account. This spans more than one organization within a company, or within an IT provider like Sun, adding to the challenge.
At many “next gen” and redshift customers, the notion of infrastructure as a rigid concept, with separate components for each major function, is starting to blur. Others consume infrastructure services as APIs – Amazon EC2, for example. Both of these emerging worlds require solid building blocks, APIs, and service abstractions.
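The “infrastructure as an API” idea can be sketched as follows. This is a toy, in the spirit of EC2-style provisioning; the class and method names (`InfraClient`, `run_instance`, `terminate`) are invented for this example and do not correspond to any real SDK. What matters is the abstraction: a server becomes an opaque handle obtained and released through service calls.

```python
# Illustrative sketch only: a hypothetical client for an infrastructure
# service consumed as an API, rather than as physical components.

class InfraClient:
    """Toy abstraction over a provisioning API."""

    def __init__(self):
        self._instances = {}
        self._next_id = 0

    def run_instance(self, image, size="small"):
        """Request a new server instance; returns an opaque instance id."""
        self._next_id += 1
        iid = f"i-{self._next_id:04d}"
        self._instances[iid] = {"image": image, "size": size, "state": "running"}
        return iid

    def terminate(self, iid):
        """Release an instance back to the service."""
        self._instances[iid]["state"] = "terminated"

client = InfraClient()
iid = client.run_instance("web-image", size="medium")
print(iid, client._instances[iid]["state"])
```

Nothing in the calling code knows (or cares) which physical box, firewall, or storage path backs the instance; that is exactly the service abstraction the building blocks must support.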
In this emergent world, servers start to contain firewall and load-balancing capabilities, the middleware platform is already installed as part of the container, the OS has become an increasingly critical but more lightweight resource manager, and storage is moving to a standardized network interface rather than dedicated connections. The effect is that systems administrators are either becoming admins of everything or (perhaps more difficult) must interface with the other admin groups to get things done. Alternatively, they look to the experts (server, security, storage, etc.) to define the scope of change and push capabilities to others.
In many environments, the actual deployment process is being pushed out to two roles: developers themselves, using a framework, IDE, or automation (think network.com), and operations, focused on repeated updates or architected deployment topologies that are repeatable and well-known (read: safe). In many Unix shops this was, among other tasks, the job of the system admin in the past.
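The operations side of that split can be sketched as deployment from a catalog of pre-approved topologies: developers supply only the application artifact, and operations instantiates a topology that is already known to be safe. All names here (`APPROVED_TOPOLOGIES`, `deploy`, the artifact name) are hypothetical, used only to illustrate the repeatable-and-well-known constraint.

```python
# Hedged sketch: operations deploys only from a catalog of well-known,
# pre-approved topology templates; anything outside the catalog is rejected.

APPROVED_TOPOLOGIES = {
    "small-web": {"web": 2, "app": 2, "db": 1},
    "large-web": {"web": 8, "app": 4, "db": 2},
}

def deploy(artifact, topology_name):
    """Instantiate a pre-approved topology for the given application artifact."""
    if topology_name not in APPROVED_TOPOLOGIES:
        raise ValueError(f"unknown topology: {topology_name!r}")
    topology = APPROVED_TOPOLOGIES[topology_name]
    # Map each role slot in the topology to the artifact being deployed.
    return {f"{role}-{n}": artifact
            for role, count in topology.items()
            for n in range(1, count + 1)}

placement = deploy("orders-service.war", "small-web")
print(sorted(placement))
```

Restricting deployment to the catalog is what makes the process safe to repeat without the system admin's case-by-case judgment that it replaced.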