Written in collaboration with Leo Leung
In our last post, we provided the background for three tenets of an enterprise cloud and covered the first tenet, the ability to serve both traditional and cloud-native applications on a single platform. In this post, we'll discuss our approach to measurable savings.
Many efficiency gains are attributed to cloud adoption. But positive effects such as lower cost through increased asset utilization, less administrative effort through a higher level of automation, and faster release cycles for new services through agile development are not tied to the infrastructure itself. These efficiency gains are achieved by introducing fault-tolerant, auto-scaling software architectures.
In the enterprise segment, mechanisms are required that combine cloud-native and enterprise workloads on a single platform, helping companies realize significant cost savings.
Managing availability, bursting, and patching applications at runtime is much easier when applications follow the design of distributed systems. Immutable servers merge the operating system (OS) and application code from multiple sources in a continuous deployment process. Frequent redeployment of server instances helps ensure business continuity while solution components are updated or upgraded. Unfortunately, only a small number of existing enterprise solutions can benefit from this approach.
So, let's have a look at the main business-case levers and analyze how a modern infrastructure stack addresses these efficiency gains in an enterprise context.
Workload consolidation is a very effective driver of increased efficiency. Most cloud service providers measure much higher utilization rates on their infrastructure components than traditional data centre operators. Modularization and automation are employed at the OS level and allow for workload consolidation based on consumption profiles.
We introduced inter-process communication (IPC) as the main differentiator between enterprise applications and cloud-native workloads. Nevertheless, to harmonize workload profiles we need to be more specific and distinguish between three types of workloads. The first group comprises applications that use IPC mechanisms provided by the operating system; a good example is a database server that controls the read and write process to a disk. The second group comprises services that employ host-to-host communication, e.g. a time-series database that stores key/value pairs and replicates this information across multiple nodes. The third group comprises applications that use web sockets and rely on the domain name service to address different hosts, e.g. a web frontend that calls an HTTP API.
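A minimal Python sketch can make the three communication patterns concrete. The workload names and payloads below are hypothetical illustrations, not part of any Oracle tooling: type 1 uses an OS pipe (IPC on one host), type 2 replicates a value to a peer over TCP (host-to-host), and type 3 only knows a DNS name, not a machine.

```python
import os
import socket
import threading

# Type 1: OS-level IPC -- a pipe between processes on a single host,
# the kind of primitive a database server depends on.
r, w = os.pipe()
os.write(w, b"commit")
ipc_msg = os.read(r, 6)

# Type 2: host-to-host -- replicate a key/value pair to a peer over TCP,
# as a time-series store would (peer stubbed as an echo server).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # OS picks a free port
srv.listen(1)

def peer():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(64))  # peer acknowledges the replicated pair

threading.Thread(target=peer, daemon=True).start()
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"cpu_load=0.42")
replicated = cli.recv(64)
cli.close()
srv.close()

# Type 3: name-based -- a web frontend resolves a DNS name and talks to
# whichever host answers; only the name is fixed, not the machine.
frontend_target = socket.gethostbyname("localhost")
```

The point of the distinction: the first pattern ties the workload to one machine, the second ties it to known peers, and only the third is fully location-independent.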
In many enterprises, the advantages of layered architectures have been the guiding principle for planning and implementing functional enhancements for decades. Separating the user interface from the business logic, and the business logic from data management, allows for flexibility, scalability, and maintainability. But layering is usually applied at the application level, not at the OS level. Today, applications no longer run on a single machine, and layering is required to reflect network structures accordingly. Employing middleware protocols or hypervisor networks only allows for two-tier architectures, while harmonizing workload profiles demands three separate deployment environments.
Network-layer protocols separate process-to-process from host-to-host communication, and transport-layer protocols separate host-to-host from port-to-port communication. Separating these three layers enables operators to build infrastructure stacks that optimize infrastructure consumption with harmonized deployment models. For most enterprise architects, applications, not workloads, represent the smallest planning unit. But solutions are not the most suitable delivery object for a cloud data centre. When applications run on shared infrastructure, every workload becomes a distinct network service. While IPC patterns represent the consumption profiles of these network services, the network protocol layers make it possible to run them on optimized infrastructure stacks without giving up on managing solutions as a comprehensive business service.
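To illustrate "every workload becomes a distinct network service": in the sketch below (workload names hypothetical), two workloads share one host and one network-layer address, and are distinguished only at the transport layer by their port. The (host, port) pair is the smallest unit the infrastructure needs to know about, independent of the application above it.

```python
import socket
import threading

def serve(sock, payload):
    # Minimal stand-in for a workload: answer one request with its name.
    conn, _ = sock.accept()
    with conn:
        conn.sendall(payload)

# Two workloads on the same host become two distinct network services,
# separated only by their transport-layer port.
services = {}
for name in ("database", "frontend"):      # hypothetical workload names
    s = socket.socket()
    s.bind(("127.0.0.1", 0))               # same host; OS picks a free port
    s.listen(1)
    threading.Thread(target=serve, args=(s, name.encode()),
                     daemon=True).start()
    services[name] = s.getsockname()

# A client addresses each workload purely by (host, port).
responses = {}
for name, addr in services.items():
    with socket.create_connection(addr) as c:
        responses[name] = c.recv(16).decode()
```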
Web services have proven that distributed services scale better than large monoliths when serving business logic to a fluctuating group of users. With deployment automation and a sufficient number of requests, the increased asset utilization makes up for the orchestration overhead. Moreover, fast iterations of isolated application containers drive innovation and have become more relevant to business success than traditional process automation. From a TCO perspective, the adoption of web architectures has therefore developed into an important efficiency driver. Cohesive solution architectures can't remain on a single infrastructure stack but need to employ the best-suited deployment model for the data-management, business-logic, and presentation layers.
At Oracle, we believe that effective workload consolidation requires a choice between deployment models without separating different workloads into distinct network domains. Instead of automating the provisioning process for singular components, we focus on consolidating workload types throughout the entire infrastructure stack. Therefore, we embraced a flat network design that combines engineered systems, physical and virtual nodes, and even containers in one network domain. A centralized database cluster can be combined with auto-scaling web applications without submitting service requests for manual router configuration.
Oracle’s cloud infrastructure gives operators choices for consolidating workloads with modularization and automation, without constraining developers to a single deployment model. Our customers can closely control dedicated infrastructure even though they rely on Oracle to run the hardware. They can both scale up or down where required and scale in or out where possible. They can dedicate hardware to individual applications or commit entire pools to a single platform. Developers continue to determine the most suitable deployment model, while operators select the most suitable infrastructure stack for every type of workload.
One of the main objectives of a cloud migration is the reduction of operational cost. But migrating existing solutions often requires refactoring, if not reprogramming, of applications. This is seldom supported by a positive business case; long-term benefits are constrained by short-term goals. The challenge for many CIOs is to increase the number of solution components that comply with the design of cloud-native services without introducing upfront cost.
Cloud services are built following the design patterns of distributed systems. Workload isolation enables agile development, and deployment automation allows for the elasticity required to collect massive amounts of unstructured data, e.g. for IoT solutions. Monetization of virtual infrastructure allows companies to implement information-driven business models, using IT services as a revenue generator. But make-or-buy decisions for cloud services are often constrained by the ability to interface with applications that do not run in the public cloud. While API management is a well-addressed field, building meaningful API servers remains an issue.
Adopting the design of web-service back ends for business-integration purposes is a promising new approach toward service-oriented architectures (SOA). However, microservices only serve this purpose when deployed in network domains with access to existing applications, which is seldom the case on shared infrastructure stacks. To benefit from operations-oriented architectures, many companies have started to implement an HTTP API portfolio (application programming interfaces exposed on the HTTP layer). The value of APIs depends significantly on the ability to process business-relevant data, and in enterprises most of this data resides in existing applications. This demands an integration concept that connects public with private network domains without creating security vulnerabilities, and that allows developers to rewrite interfaces without changing existing code. Releasing isolated development, test, and production environments with access to business-critical data leads to faster, agile development and reduces the requirements backlog.
Businesses become faster and more flexible in delivering data and business functionality to partners, suppliers, and customers. A cloud-native API server can acquire and distribute data from and to multiple sources asynchronously and translate a session-oriented communication process into the continuous bit stream that a relational database server expects. This allows operators of legacy applications to transition to cloud-native workloads at their own pace.
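The asynchronous fan-out such an API server performs can be sketched in a few lines. This is a minimal illustration, not Oracle's implementation: the backend sources ("erp", "crm", "warehouse") and their latencies are hypothetical stand-ins for HTTP or database round trips into existing applications.

```python
import asyncio

async def fetch(source, delay):
    # Stand-in for a round trip to one backend (names hypothetical).
    await asyncio.sleep(delay)
    return source, f"data-from-{source}"

async def api_handler():
    # Acquire data from multiple sources concurrently rather than one by
    # one; total latency approaches the slowest source, not the sum.
    results = await asyncio.gather(
        fetch("erp", 0.03),
        fetch("crm", 0.01),
        fetch("warehouse", 0.02),
    )
    return dict(results)  # merged into a single reply for the caller

payload = asyncio.run(api_handler())
```

The same handler shape lets the API layer evolve independently: a source can be swapped from a legacy system to a cloud-native service without changing the interface the caller sees.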
At Oracle, we propose that customers start with a lift-and-shift of existing applications to a dedicated infrastructure stack. We provide tools that wrap entire solution architectures on-site and help redeploy the same footprint on our cloud infrastructure. In many cases, that is a prerequisite for consolidating database workloads. A centralized database cluster then allows cloud-native workloads to connect via network protocols, in order to host servers that produce HTTP APIs. This environment can be extended with containers that host distinct functions or apps. Adopting microservices becomes an incremental approach rather than a rip-and-replace scenario.
Many cloud providers have built an impressive portfolio of management services for virtualized workloads. Nevertheless, relying on service-management tools provided by the cloud provider creates dependencies that usually increase, not decrease, the total cost of ownership (TCO). Enterprises rely on dedicated infrastructure, and employing proprietary services creates supplier dependencies. For many enterprises, open-source tools are a better alternative when building new capabilities like isolation, elasticity, and monetization.
Even though introducing new ITSM capabilities appears more cost-efficient when relying on the provider's operational services, building a homogeneous toolset that works across infrastructure stacks and interfaces with existing ITSM toolsets introduces costs that are often forgotten. Managing risk and compliance for immutable servers depends on the ability to define business-specific policies for releasing infrastructure components. Applying the same policies at multiple service providers becomes a challenge when the deployment mechanisms employ different operational tools, leading to additional costs that are often underestimated. Oracle has chosen a different path: we integrate open-source software for end-to-end deployment automation to reduce operating expenses without creating supplier dependencies.
While many successful web-service companies can rely on operational services that public cloud providers build for shared infrastructure, enterprises need operating environments that work across infrastructure providers. Proprietary services are not sufficient; isolation, elasticity, and monetization need to be enabled on dedicated infrastructure.
We consciously avoided building large portals with numerous operational service-management interfaces and instead focused our development efforts on a management and audit API that allows enterprise customers to integrate the cloud controller with their existing operational toolset. Assigning dedicated infrastructure pools, even for volatile workloads, enables the use of existing ITSM procedures. Even cloud-native apps can thus be operated using private implementations of well-known open-source tools. At Oracle, we have dedicated developers building interfaces like a Terraform provider and Chef plugins, and implementing Kubernetes as the orchestrator for Docker containers.