Consolidation - The importance of a cost meta-model*

[Diagram: the cost meta-model used by Sun's UK Data Centre Practice]

All cost management methodologies start with either a detailed chart of accounts or a meta-model.

The Sun Blueprint "Consolidation in the Data Center"1 by David Hornby and Ken Pepple documents this fact and examines several models. Once an acceptable meta-model is defined, detailed fact finding can be undertaken to understand costs and how they are allocated and consumed by the systems and the supporting infrastructure. The diagram aside is the meta-model used by Sun's UK Data Centre Practice. It is closely based upon the model developed by Sun over the last two years and documented in Nigel Hawkes' presentation "Consolidation: From Soup to Nuts". One of the key differences between this model (see aside) and its predecessor is that this one answers the "Barcap" question: few infrastructure departments hold complexity or quality budgets, nor are they chartered to recover the business revenues which their infrastructure helps to earn.

I am convinced that complexity costs money, but in most cases architectural complexity shows up in budgets as high levels of expense on other cost contributors. Unless the provider is a contracted ASP and holds a budget for SLA failure penalties, slow or unavailable systems do not cost the IT department money. However, if services are over-provided, then a reduction in the service may allow IT organisations to reduce their costs, just as under-provision may be a justification for investment in infrastructure; but both "Architectural Complexity" and "Service Quality" are to my mind design or service constraints on the IT infrastructure, not budget lines. The "Application Delivery" cost category (or budget) is usually held by a different department; most consulting I have undertaken has been with infrastructure provider organisations or departments.

In the model above, the horizontal bars are the cost contributors that make up the IT organisation's "Chart of Accounts". Some of these are self-explanatory but others require some thought and even customisation. One example is whether operating system or middleware software products should be accounted for as hardware or software maintenance. (The answer is that it depends upon the particular supply contract.) Apart from this example the bottom two layers are fairly self-explanatory, but the scoping of the systems under measurement raises an interesting problem: whether an asset has been directly purchased, and is thus represented as a piece of hardware or software, or is indirectly required and should be represented as a People cost (because the asset is a personnel provisioning investment, such as a phone or desktop system) or as IT management (such as a Help Desk database server). Some people costs may be best represented as a cost item under IT management, if that is what the people do.
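To make the categorisation decisions above concrete, here is a minimal sketch in Python of how such a chart of accounts might be represented and totalled. The category names, items and figures are illustrative assumptions of mine, not the actual Sun model.

```python
# A minimal sketch of a cost meta-model as a chart of accounts.
# Categories, items and figures are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class CostItem:
    name: str
    category: str       # e.g. "Hardware Maintenance", "People", "IT Management"
    annual_cost: float  # currency units per year

chart_of_accounts = [
    # OS support bundled with the hardware supply contract, so
    # accounted for as hardware maintenance (contract-dependent).
    CostItem("OS support (bundled)", "Hardware Maintenance", 40_000.0),
    # Desktops and phones are a personnel provisioning investment,
    # so represented as a People cost.
    CostItem("Staff desktops", "People", 25_000.0),
    # The Help Desk database server supports management, not delivery.
    CostItem("Help Desk DB server", "IT Management", 12_000.0),
]

def total_by_category(items):
    totals = {}
    for item in items:
        totals[item.category] = totals.get(item.category, 0.0) + item.annual_cost
    return totals

print(total_by_category(chart_of_accounts))
```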

A further development of the model is the division of resources into application hosts which are "fee earners" for the IT organisation, and others which perform provisioning or support functions. The cost of the latter systems can then be factored onto the "fee earners". This might mean that the licensing and support for application-enabling systems and software would be accounted for in the "Acquisition & Maintenance Costs" categories, while management software and the server systems used to support service delivery & support are seen as part of IT management. These customisations are perfectly legitimate and in many cases required to get an understanding of the problem. It's the requirement to customise which gives the meta-model approach its flexibility and popularity.
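One simple way to factor support costs onto the fee earners is pro rata by direct cost; the following sketch assumes that rule, and the host names and figures are invented for illustration.

```python
# A sketch of loading support/provisioning system costs onto the
# "fee earner" application hosts, pro rata by direct cost.
# Hosts, split rule and figures are illustrative assumptions.
fee_earners = {"trading_app": 100_000.0, "billing_app": 60_000.0}
support_costs = 32_000.0  # e.g. management software, service-desk servers

direct_total = sum(fee_earners.values())
loaded = {
    host: cost + support_costs * (cost / direct_total)
    for host, cost in fee_earners.items()
}
print(loaded)  # {'trading_app': 120000.0, 'billing_app': 72000.0}
```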

Each cost contributor that is deemed in-scope needs to be examined to determine how much was spent, and what the "financial scalability rules" are. These express the fixed and variable cost factors as a function of the size of the estate (or, if not the size, some other valid policy instrument), since we propose to manipulate the reality represented by the parameter variable and compare the before and after states to see whether a better solution is available, and whether the project to transition to this desired state is financially viable.
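A financial scalability rule can be as simple as a fixed cost plus a per-server cost, which is enough to compare before and after states and test project viability. The sketch below assumes that linear form; all the figures are illustrative, not drawn from any real engagement.

```python
# A sketch of a "financial scalability rule": annual cost as fixed
# plus variable components of estate size, with a before/after
# comparison to test whether a consolidation project pays back.
# All figures are illustrative assumptions.
def annual_cost(servers, fixed=200_000.0, per_server=8_000.0):
    return fixed + per_server * servers

before = annual_cost(120)                     # current estate
after = annual_cost(40, per_server=11_000.0)  # fewer, larger servers
transition_cost = 150_000.0                   # one-off project cost

annual_saving = before - after
payback_years = transition_cost / annual_saving
print(f"saving {annual_saving:,.0f}/yr, payback {payback_years:.1f} years")
```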

It is my view that, with thought, most IT shops can achieve "More for Less" and take advantage of the ongoing improvements in price/performance and system capability offered by the IT vendors.


*    The meta-model (or at least its diagrammatic representation) is copyrighted. The predecessor model was published in Computacenter's White Paper "Consolidating the Server Infrastructure"2.

1    "Consolidation in the Data Center" by David Hornby and Ken Pepple227 pages ISBN 0-13-045495-8 Published September 19, 2002 - this is a buy it page but the book description is easier to find than on the Sun Blueprints Books page.

2    "Consolidating the Server Infrastructure", A Computacenter Whitepaper. V1.0 Published Nov 2002 - It looks like they've updated their White Paper and the meta-model's dropped out.
