At some point during the last decade, two-speed IT started gaining traction and looked like the future of IT strategy for every company. The need for it was undeniable: changing customer requirements and a growing reliance on digitally enabled processes put considerable strain on IT infrastructures, no longer allowing for the slow responses legacy systems returned.
The two-speed response was simple: keep the legacy infrastructure running at its own pace, and build a new IT alongside it that delivers the faster response today’s business demands.
The idea also seemed attractive: it promised modernization without revamping much, without changing our frameworks much, and thus without re-thinking or re-creating anything much. Many companies then started implementing two-speed IT, working to make their legacy backbones operate alongside a brand-new “business-facing IT”. Happy times; in the end, we are all fond of risk aversion.
From a process engineering standpoint, the bottleneck phenomenon raises concerns about frameworks like these. Two-speed advocates will argue for the design of critical paths to improve overall performance, but this adds complexity to processes that are required to be more agile by the minute. The brand-new “business-facing” fast infrastructure has to interact with legacy architectures back and forth, in a hurry-up-and-wait dynamic that only gets worse the more interactions our operation demands.
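The hurry-up-and-wait dynamic can be made concrete with a back-of-the-envelope model. The sketch below is purely illustrative, with invented timings: the fast layer’s own work is fixed, while each synchronous interaction with the legacy backbone adds a full round trip, so the slow side comes to dominate total latency as interactions multiply.

```python
# Illustrative model of a fast layer blocked on a slow legacy backbone.
# All timings are invented for the sake of the example.

FAST_WORK_MS = 100    # total work done in the new "business-facing" layer
LEGACY_CALL_MS = 400  # one synchronous round trip to the legacy backbone

def end_to_end_ms(interactions: int) -> int:
    """Total latency: fixed fast-layer work plus one legacy round trip
    per interaction (the hurry-up-and-wait pattern)."""
    return FAST_WORK_MS + interactions * LEGACY_CALL_MS

def legacy_share(interactions: int) -> float:
    """Fraction of total time spent waiting on the legacy side."""
    return interactions * LEGACY_CALL_MS / end_to_end_ms(interactions)

for n in (1, 5, 20):
    print(f"{n:2d} interactions: {end_to_end_ms(n):5d} ms "
          f"({legacy_share(n):.0%} waiting on legacy)")
```

Note that no matter how fast the new layer gets, the waiting share only climbs with the number of interactions; speeding up the fast side alone barely moves the total.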
Interactions between the stable legacy backbone and the faster new layers of a two-speed architecture were not supposed to happen that often. Traditional “Waterfall” project frameworks are not iterative in nature: they test at the end, their goals are quite static, and they allow for planning a critical project route that minimizes the impact of slow agents.
Business kept evolving, and the need for agility and innovation flooded all industries, ubiquitously backed by IT. Frameworks shifted in response, and companies started looking at different ways of getting things done, better aligned with new market requirements. Waterfall gave way to Agile in the IT realm, which holds a strong analogy with Design Thinking at the company level.
The conflict between two-speed IT and modern project management often shows up in the number of interactions with the less productive agents mentioned above. Iterative methods demand bouncing against every player more often, leaving less time to optimize flows and thus dragging overall performance down.
The main rationale behind two-speed IT is the need to leave the “stable” backbones of our infrastructure untouched while we build up the “agile” mechanisms that will drive our business. In a previous article, I discussed this co-existence between Agility and Stability in the modern enterprise, how public and private clouds may enable both, and the need for the two to be compatible if we are to achieve both Stability and Agility. A modern approach should aim at a holistic, high-performing outcome that lets us address projects efficiently, meeting business objectives faster and with less risk.
Investment in IT infrastructure should then consider not only the “front end” but also the backbone. This is no different from what we do when optimizing any other operation in a company. Imagine you are a manufacturer with two warehouses across the country. They are both old, with no automation, no quality standards, fully “push” in their operating nature. We now look for a third warehouse, and of course we realize that a newer one, with automated processes, real-time monitoring of locations and flows, CPFR enabled, and so on, will serve our operation better. Should this be enough? Or should we also invest in modernizing the original two? In the end, we should aim at monitoring our entire supply chain through a single SCM instance, and that requires modernizing all of our agents. There may be average systems and “state-of-the-art” systems, but the whole infrastructure will tend towards modernization.
Moving back to the IT world: would a brand-new public cloud strategy, completely disconnected from our private cloud strategy, serve our purposes? Or would we be silently building a two-speed operation? Let’s say we want to develop the finest customer loyalty program in our industry; what will our efficiency be when we continuously cycle from one environment to the other to develop, test, and deploy, again and again, striving towards a flawless result?
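To put rough numbers on that cycling cost, here is a hypothetical sketch (all figures are invented, not measured): each develop-test-deploy iteration that stays inside one environment costs a baseline amount, and every crossing between disconnected environments adds a fixed overhead.

```python
# Hypothetical cost model for iterative delivery across two disconnected
# environments. The hour figures are invented for illustration.

BASE_ITERATION_H = 1.0     # one develop-test-deploy loop in a single stack
CROSSING_OVERHEAD_H = 6.0  # extra hours per environment crossing

def project_hours(iterations: int, crossings_per_iteration: int) -> float:
    """Total effort when every iteration pays for its crossings."""
    return iterations * (BASE_ITERATION_H
                         + crossings_per_iteration * CROSSING_OVERHEAD_H)

# 20 iterations in a unified environment vs. a two-speed split where each
# loop crosses over twice (out to the other environment and back).
unified = project_hours(iterations=20, crossings_per_iteration=0)    # 20.0
two_speed = project_hours(iterations=20, crossings_per_iteration=2)  # 260.0
```

The loop count is driven by the iterative framework, so the only lever left is the crossing overhead, which is exactly what a unified cloud strategy removes.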
When those environments differ in performance, SLAs, standards, and architectures, the toll on both innovation and operation is considerable.
Two-speed has been a useful intermediate stage, perhaps fit for a time when IT was not yet an omnipresent force underlying each and every corner of our business. Its philosophy is ad hoc in nature, and it brings challenges when setting expectations for performance, efficiency, and frameworks, or even when recruiting IT staff to take care of our “old” architecture.
Gearboxes, those crucial and extremely complex pieces of engineering, can only have one gear engaged at a time; that is why they shift. In today’s business, two-speed architectures require those shifts to happen constantly and seamlessly, and therefore require complex engineering in the middle to make them happen. This added complexity is where two-speed IT backfires. Can companies afford it?
It is well known that a holistic approach to strategy, not only for IT but for any business process, achieves more sustainable and reliable performance overall. Only when we embrace modern project frameworks in all of our company’s ventures, or a sustainable cloud strategy in the case of IT, will business evolution stop being a source of pain and become a source of opportunity.