UNIX Consolidation - Top tips for right sizing
By DaveLevy on Sep 15, 2004
I've been Sun UK's Principal Consultant in our Data Centre Practice since Autumn 2003 and, together with colleagues, have been talking to customers and creating a consolidation planning methodology. The consulting offerings available to customers in the UK are described here.
The methodology involves defining the problem area or scope, creating a detailed catalogue, and describing the systems: their costs, cost scalability rules, capabilities and utilisation. We then map the current systems' consumption onto a future state architecture, cost the transition, and perform a traditional investment analysis. This involves understanding the expected costs of the future state solution and designing to obtain the benefits of new systems which are smaller, cheaper, faster and more reliable. We have some key rules of thumb when designing a future state; these include:
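The mapping and investment analysis steps above can be sketched in a few lines. This is a minimal illustration only; all figures (system names, CPU counts, utilisations, costs) are hypothetical, and a real engagement would use the detailed catalogue rather than round numbers.

```python
# Illustrative sketch (all figures hypothetical): map current system
# consumption onto a consolidated future state and compute a simple payback.

# Current estate: (name, CPUs, average utilisation, annual run cost)
current = [
    ("web-01", 4, 0.15, 20_000),
    ("web-02", 4, 0.10, 20_000),
    ("app-01", 8, 0.25, 35_000),
    ("db-01", 12, 0.40, 60_000),
]

# Capacity actually consumed, in CPU-equivalents
consumed = sum(cpus * util for _, cpus, util, _ in current)

# Size the future state with headroom (e.g. plan for 60% peak utilisation)
target_util = 0.60
future_cpus = consumed / target_util

current_annual_cost = sum(cost for *_, cost in current)
future_annual_cost = 55_000   # hypothetical run cost of the consolidated system
transition_cost = 40_000      # hypothetical migration cost

annual_saving = current_annual_cost - future_annual_cost
payback_years = transition_cost / annual_saving

print(f"Consumed capacity: {consumed:.1f} CPU-equivalents")
print(f"Future state size: {future_cpus:.1f} CPUs at {target_util:.0%} utilisation")
print(f"Annual saving: {annual_saving}, payback: {payback_years:.1f} years")
```

A real analysis would of course discount the savings and include the cost scalability rules, but the shape of the calculation is the same.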
Always deploy full systems; the card cage and backplane work harder, justifying their cost, i.e. the cost/CPU is higher on half-empty systems. (This is a true rule of thumb, and is more appropriate for smaller systems (1-12 way).)
Big systems (and hence domains) will deliver higher utilisation, and hence the acquisition cost to obtain the necessary capability will be lower; you pay for unused capability as well as used.
Using large systems as if they were multiple small systems is silly. Only buy huge systems if you need big domains (or are at the margin of requiring big domains and value the scalability and/or flexibility).
The premium paid in buying scalable systems can be seen as an option purchase for a bigger system than the one deployed, i.e. buy 48 CPUs and an option on 24 more.
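The first rule of thumb is easy to see numerically: the chassis, card cage and backplane are a fixed cost regardless of how many CPU boards are populated, so the cost per CPU falls as the frame fills. The prices below are hypothetical, chosen only to show the shape of the curve.

```python
# Illustrative sketch (hypothetical prices): why cost/CPU is higher on a
# half-empty system -- the chassis/backplane cost is fixed regardless of
# how many CPU boards are populated.

chassis_cost = 100_000    # card cage, backplane, power supplies (hypothetical)
cpu_board_cost = 15_000   # incremental cost per CPU (hypothetical)
slots = 12

def cost_per_cpu(populated):
    """Total system cost divided over the CPUs actually installed."""
    return (chassis_cost + populated * cpu_board_cost) / populated

full = cost_per_cpu(slots)        # 12 CPUs -> 23,333 per CPU
half = cost_per_cpu(slots // 2)   # 6 CPUs  -> 31,667 per CPU
print(f"cost/CPU full:  {full:,.0f}")
print(f"cost/CPU half:  {half:,.0f}")
```

The same arithmetic is what makes the option framing attractive: with a scalable frame you pay the fixed cost once, and later CPU boards arrive at the lower incremental price.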