Having been an admin in IT, I understand the pressure to keep things running and not mess anything up. There are two approaches to this. The first is to do nothing: if it works, don't fix it. This sounds great, but the problem is that as systems age, they become more difficult to keep operational. When I worked at a local university, the university web site ran on an old desktop, a Sun SPARC machine with just enough memory to run an Apache web server. On the positive side, it worked, and the university home page was stable and reliable. On the negative side, we had no spare parts, no new security patches, and no alternate platform to run the service on. We were at risk, and the reason we were at risk is that we opted to do nothing because it worked.
The second approach is to constantly upgrade systems. This tends to increase cost and cause instability due to change. Constantly changing systems requires time on the admin's part and training for the end user. Upgrades also tend to bring in new features and functions that may or may not save time. For example, if we update the Apache web server and include PHP as a new option because it is now part of the standard template for all web servers, we need to support developers who do not know how to use PHP and provide training on shared resources and development. That is additional cost for IT with no new revenue stream.
I mention a standard template in the second approach. I personally think this is important. I found it very difficult to maintain 25 different implementations of the same product. When something broke or needed patching, it took significantly more time to update the systems. If all systems are the same, a fix just requires repeating the same steps 25 times, not recreating everything from scratch.
A recent Gartner study looked at cost-cutting opportunities for IT in 2009. The assumptions presented were worth noting. A company that generates $1B in revenue typically spends around 5% on IT, which works out to $50M in IT spend. The majority of these costs are support costs for hardware and software: 22% is associated directly with the data center, 5% with the help desk, and 13% with desktops. Application support (16%) and application development (20%) make up a large chunk of the operational cost.
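The arithmetic behind those figures is worth sketching out, since the dollar amounts are what a CIO actually has to defend. A quick illustrative calculation, using only the percentages quoted above:

```python
# Back-of-the-envelope breakdown of the Gartner-style IT budget above.
revenue = 1_000_000_000          # $1B annual revenue
it_spend = revenue * 0.05        # ~5% of revenue goes to IT -> $50M

# Share of IT spend by category (percentages quoted from the study).
breakdown = {
    "data center": 0.22,
    "help desk": 0.05,
    "desktops": 0.13,
    "application support": 0.16,
    "application development": 0.20,
}

for category, share in breakdown.items():
    print(f"{category}: ${it_spend * share / 1e6:.1f}M")
# data center alone is $11M/year, and the listed categories
# together account for 76% of the $50M budget.
```

Seen this way, even a modest percentage improvement in data center or application costs is worth millions per year, which is why those areas get the attention.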
Info-Tech looked at how things can be changed. Travel and training were the low-hanging fruit. Changes in staffing and in consultants/contractors are easy to make but negatively impact revenue. Outsourcing and renegotiating contracts take longer but have a significant impact on cost. Data center consolidation and process change also take longer but have less of an impact on cost.
One of the recommended ways of reducing cost is virtualization and consolidation. In the past, people purchased one server per application and sized it to meet peak load. If you purchase smaller systems and cluster them, you can scale performance as needed and use the idle resources for other things during non-peak times. The management issue then becomes peak management and demand management. This does increase administration requirements, but it significantly reduces capital spend and hardware support cost. The analysis is easy to do for larger companies that use relatively large servers with 8 or more processors in a single system. An example of this is Oracle Education. We use RAC systems on the back end for a variety of classes and OracleVM to provision prepopulated classes. We had been using snapshots, copying from system to system, and dedicated database servers for classes. When we went to RAC and OracleVM, we reduced the hardware by a factor of 6 and increased CPU utilization from 7% to 73%. Revenue per server increased 5x, which means that each class becomes more profitable without substantive changes in the way we do business.
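The consolidation math is simple enough to check yourself. Using the utilization figures from our experience (the fleet size of 24 servers here is a made-up example, not our actual count):

```python
# Illustrative consolidation math using the utilization figures above.
servers_before = 24              # hypothetical fleet size (assumption)
util_before = 0.07               # 7% average CPU utilization before
util_after = 0.73                # 73% after consolidating onto RAC + OracleVM

# Aggregate useful work stays the same, so the theoretical minimum
# fleet shrinks in proportion to the utilization gain.
load = servers_before * util_before           # "server-equivalents" of real work
min_servers = load / util_after               # theoretical floor, ~2.3 servers

# In practice we kept headroom for peak demand; a 6x reduction gives:
servers_after = servers_before / 6            # 4 servers

print(f"theoretical minimum: {min_servers:.1f}, actual after 6x cut: {servers_after:.0f}")
```

The gap between the theoretical minimum and the 6x figure is the headroom you keep for peak load, which is exactly the peak-management and demand-management problem mentioned above.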