Why You Want Your Systems Management to Move to the Cloud

Dan Koloski
Vice President

In the latest edition of Oracle predictions, as profiled on developer.com, we stated that we expect 60% of IT organizations to move systems management to the cloud by 2020. If you write code for a living, it's in your best interest to champion this move as fast and hard as you can.

Here's why...writing the world's most beautiful code is all for naught if your operations team won't let you put it into production, or if it goes into production as unmanaged code that then delivers a terrible user experience. And in most organizations, the disconnect between the speed of the DevOps pipeline and the rigidity of systems management disciplines almost guarantees one of those two outcomes.

Consider the following data points from a summer 2016 Forrester Survey:

  • Only 4% of I&O respondents believe that their business is very satisfied with the time it takes to release new features or changes to customer-facing business services and applications.
  • Only 11% of teams have real-time dashboards that show release and change pipelines and updated topologies, and even those dashboards typically are not complete.
  • 32% of production apps have problems that are often only discovered when they are reported by customers.

Translation: we know we need to move faster, but we don't have visibility, and moving without visibility is often disastrous for users.

How did it get so bad? The reason is simple: most IT operations organizations and the corresponding systems management software were designed before the era of virtualization and cloud. Their primary objective was to keep systems stable and to allow only infrequent code refreshes. With speed not an issue, they could afford complicated and labor-intensive activities like updating CMDBs and monitoring instrumentation to match the latest build, or running ongoing analyses of production capacity and dependency mapping.

And their high-maintenance on-premises systems management software matched this cadence. Traditional tooling falls into two basic categories: single-pane-of-glass repositories that carry a huge human labor cost to maintain, and domain-specific tooling that is smarter but looks at only a small piece of the overall data pie. In either case, the tools rely on substantial human effort to collect, correlate, interpret, and understand the meaning of vast amounts of operational data.

When we were rolling out code on predictable schedules once every quarter or year, these human-based processes and legacy tools fit right in. But in a DevOps scenario, where we are iterating faster and promoting new features every day, these manual processes can't keep up. So we're left with two bad choices...wait for them to catch up, or go around them.

In summer 2016, IDC noted, "IDC's research shows that 92% of enterprise IT organizations currently have one or more monitoring tools, yet 55% recognize that they need new solutions designed for the scale and complexity of the era of digital business, hybrid cloud and big data."

Bottom line: we need to do better, and the good news is that the industry has come up with a better way. New cloud-based systems management solutions run machine learning on huge compute farms against a unified operational data set, removing the dependence on human factors and letting operations move as fast as development. In these offerings, purpose-built machine learning runs continuously against every scrap of operational data (metrics, configs, logs, events, and so on), so monitoring can continuously discover and adapt itself to new topology changes, automatically learn seasonally adjusted expected behavior and flag anomalies, and perform real-time capacity planning to optimize the use of elastic cloud resources. In other words, most of those labor-intensive manual processes can be handled by the machine learning regime.
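To make the idea of "seasonally adjusted expected behavior" concrete, here is a minimal sketch in Python with pandas. It is a generic illustration, assuming a single datetime-indexed metric series, not the algorithm behind any particular product: learn an expected value for each hour-of-day/day-of-week bucket from the history itself, then flag points that stray too far from it.

import pandas as pd

def flag_anomalies(metric: pd.Series, n_sigmas: float = 3.0) -> pd.DataFrame:
    """metric: a datetime-indexed series, e.g. response time sampled every 5 minutes."""
    df = metric.to_frame("value")
    # Seasonal bucket: same hour of day on the same day of week
    df["season"] = df.index.dayofweek * 24 + df.index.hour
    # Expected behavior per bucket, learned from the data rather than maintained by hand
    baseline = df.groupby("season")["value"].agg(["mean", "std"])
    df = df.join(baseline, on="season")
    # Flag points that deviate from the seasonally expected value
    df["anomaly"] = (df["value"] - df["mean"]).abs() > n_sigmas * df["std"]
    return df[["value", "mean", "anomaly"]]

A cloud service applies the same principle continuously, across thousands of metrics and with far more sophisticated models, but the point is the same: the baseline is learned from the data, not curated by a person.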

In order to take advantage of machine learning, we need to unify and normalize all of the operational data. Today that data sits in dozens of independent repositories (APM tools, log analytics tools, debugging consoles, cloud service consoles, and so on). Unifying that data on-premises requires massive amounts of storage and compute and whole teams of data scientists, but new cloud-based solutions can ingest all of that data at scale with no extra work for the customer.
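As a rough illustration of what "unify and normalize" means, here is a sketch that maps records from two different tools into one common shape a single analysis layer could consume. The input formats and field names are hypothetical, not the schemas of any real APM or log analytics product.

from datetime import datetime, timezone

# Hypothetical input records; real tools each have their own schemas.
apm_events = [{"ts": 1469102400, "txn": "checkout", "latency_ms": 412}]
log_records = [{"time": "2016-07-21T12:01:00Z", "host": "web-03", "level": "ERROR"}]

def normalize_apm(e: dict) -> dict:
    # Convert the APM tool's epoch timestamp and field names to the common schema
    return {
        "timestamp": datetime.fromtimestamp(e["ts"], tz=timezone.utc).isoformat(),
        "source": "apm",
        "entity": e["txn"],
        "metric": "latency_ms",
        "value": e["latency_ms"],
    }

def normalize_log(r: dict) -> dict:
    # Count each error-level log line as one unit of an "error_count" metric
    return {
        "timestamp": r["time"],
        "source": "logs",
        "entity": r["host"],
        "metric": "error_count",
        "value": 1,
    }

# One unified, normalized stream that correlation or machine learning can run over
unified = [normalize_apm(e) for e in apm_events] + \
          [normalize_log(r) for r in log_records if r["level"] == "ERROR"]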

When all of the data is in one place and machine learning is constantly running, IT Ops can move as fast as Development. In other words, the Ops portion of DevOps can synchronize with the Dev portion. That means fewer delays in promoting that beautiful code into production, and assurance that it will be properly managed, providing a good user experience...which, of course, is the whole point.

So, if you write code for a living, encourage your colleagues in operations to move their systems management regimes to the cloud as soon as possible. Both you and they will be happier for it.

Learn more about Oracle Management Cloud.
