
An API First Approach to Microservices Development


By Claudio Caldato, Sr. Director Development and Boris Scholl, VP Development - Microservices, Oracle Cloud


Over the last couple of years, our work on various microservices platforms in the cloud has brought us into close collaboration with many customers. As a result, we have developed a deep understanding of what developers struggle with when adopting microservices architectures, in addition to deep knowledge of distributed systems. A major motivation for joining Oracle, besides working with a great team of very smart people from startups, Amazon, and Microsoft, was the opportunity to build from scratch a platform based on open source components that truly addresses developers' needs. In this initial blog post on our new platform, we describe what drove its design and present an overview of the architecture.

What developers are looking for

Moving to microservices is not an easy transition for developers who have been building applications using more traditional methods. There are many new concepts and details developers need to become familiar with and consider when they design a distributed application, which is what a microservices application is. Throw containers and orchestrators into the mix and it becomes clear why many developers struggle to adapt to this new world.

Developers now need to think about their applications in terms of a distributed system with a lot of moving parts; as a result, challenges such as resiliency, idempotency and eventual consistency, just to name a few, are important aspects they now need to take into account. 

In addition, with the latest trends in microservices design and best practices, they also need to learn about containers and orchestrators to make their applications and services work. Modern cluster management and container orchestration solutions such as Kubernetes, Mesos/Marathon, or Docker Swarm are improving over time, simplifying concerns such as networking and service discovery, but they remain an infrastructure play. The main goal of these tools is to handle the process of deploying and connecting services, and to guarantee that they keep running in the face of failures. These aspects relate more to the infrastructure that hosts the services than to the services themselves. Yet developers still need a solid understanding of how orchestrators work, and they need to take that into account when they build services. Programming model and infrastructure are entangled; there is no clear separation, and developers must understand the underlying infrastructure to make their services work.

One thing we have heard repeatedly from our customers and the open source community is that developers really want to focus on developing their logic, not on the code needed to handle the execution environment where the service will be deployed. But what does that really mean?

It means that, above all, developers want to focus on APIs (the only thing needed to connect to another service), develop their services in a reactive style, and sometimes just use 'functions' to perform simple operations when deploying and managing more complex services involves too much overhead.

There is also a strong preference among developers to have a platform built on an OSS stack to avoid vendor lock-in, and to enable hybrid scenarios where public cloud is used in conjunction with on-premise infrastructure.  

This copious feedback from customers and developers was our main motivation to create an API-first microservices platform, built on the following key requirements:

  • Developers can focus solely on writing code: an API-first approach 
  • It combines the traditional REST-based programming model with a modern reactive, event-driven model  
  • It consolidates traditional container-based microservices with a serverless/FaaS infrastructure, offering more flexibility so developers can pick the right tool for the job 
  • Easy onboarding of 'external' services, so developers can leverage cloud services and connect to legacy or third-party services easily 

We have been asked many times how we would describe our platform, as it covers more than just microservices; so, in a humorous moment, we came up with the Grand Unified Theory of Container Native Development.


The Platform Approach 

So what does the platform look like and what components are being used? Before we get into the details let’s look at our fundamental principles for building out this platform:

  • Opinionated and open: make it easy for developers to get productive right away, but also provide the option to go deep in the stack or even replace modules. 
  • Cloud vendor agnostic: although the platform will work best on our New Application Development Stack, customers need to be able to install it on top of any cloud infrastructure. 
  • Open source-based stack: we are strong believers in OSS; our stack is built entirely on popular OSS components and will itself be available as OSS. 

The Platform Architecture 

Figure 1 shows the high-level architecture of our platform and the functionality of each component. 

Let’s look at all the major components of the platform. We start with the API registry as it changes how developers think about, build, and consume microservices. 

API Registry: 

The API registry stores information about all the APIs available in the cluster. Developers can publish an API to make it easier for other developers to use their service, search for a particular service or function (if a serverless framework is installed in the cluster), and test an API against a mock service even if the real service is not ready or deployed yet. To connect to a microservice or function in the cluster, developers can generate a client library in various languages. The client library is integrated into the source code and used to call the service; it always discovers the endpoint in the cluster automatically at runtime, so developers don't have to deal with infrastructure details such as IP addresses or port numbers that may change over the lifecycle of the service. In future versions, we plan to add the ability for developers to set security and routing policies directly in the API registry.
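To make the endpoint-discovery idea concrete, here is a minimal sketch of what a generated client library might look like. The names (`OrderServiceClient`, `StubRegistry`, `resolve`) are illustrative assumptions, not the platform's actual API:

```python
# Hypothetical sketch of a generated client: callers never hard-code
# IPs or ports; the endpoint is resolved through the API registry.

class ServiceEndpoint:
    def __init__(self, host, port):
        self.host = host
        self.port = port

    @property
    def base_url(self):
        return f"http://{self.host}:{self.port}"


class OrderServiceClient:
    """Illustrative generated client for a hypothetical order service."""

    def __init__(self, registry):
        self._registry = registry  # injected registry lookup

    def _endpoint(self):
        # Resolved on each call, so the client follows the service even
        # if its address changes over the service's lifecycle.
        return self._registry.resolve("order-service")

    def get_order_url(self, order_id):
        return f"{self._endpoint().base_url}/orders/{order_id}"


class StubRegistry:
    """Stand-in for the real cluster-side registry lookup."""

    def resolve(self, name):
        return ServiceEndpoint("10.0.0.7", 8080)


client = OrderServiceClient(StubRegistry())
print(client.get_order_url("42"))  # http://10.0.0.7:8080/orders/42
```

Because the lookup happens at call time, redeploying the service to a different address requires no change in the consuming code.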

Event Manager: 

The event manager allows services and functions to publish events that other services and functions can subscribe to. It is the key component that enables an event-driven programming model, in which EventProviders publish events and consumers, either functions or microservices, consume them. With the event manager, developers can combine a traditional REST-based programming model with a reactive/event-driven model in a consolidated platform that offers a consistent experience in terms of workflow and tools.
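The publish/subscribe pattern behind the event manager can be sketched in a few lines. This is an in-process toy under assumed names (`EventManager`, `subscribe`, `publish`); in the real platform, delivery of course crosses service boundaries in the cluster:

```python
# Minimal in-process sketch of the publish/subscribe pattern an event
# manager provides: subscribers register handlers per topic, and
# publishers fire events without knowing who consumes them.

class EventManager:
    def __init__(self):
        self._subscribers = {}  # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every subscriber of this topic.
        for handler in self._subscribers.get(topic, []):
            handler(payload)


events = EventManager()
received = []

# A function (or microservice) subscribing to order events:
events.subscribe("order.created", lambda e: received.append(e["id"]))

# An EventProvider publishing an event:
events.publish("order.created", {"id": "A-1001"})
print(received)  # ['A-1001']
```

The decoupling is the point: the publisher needs no knowledge of whether the consumer is a long-running microservice or a short-lived function.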

Service Broker: 

In our transition to working for a major cloud vendor, we have seen that many customers choose to use managed cloud services instead of running and operating those services themselves on a Kubernetes cluster. A popular example is Redis cache, offered as a managed service by almost all major cloud providers. As a result, it is very common that a microservices-based application consists not only of services developed by the development team but also of managed cloud services. Kubernetes has introduced a great new feature called the service catalog, which allows the consumption of external services within a Kubernetes cluster. We have extended our initial design to not only configure access to external services, but also to register those services with the API registry, so that developers can consume the managed services as easily as their own.

In this way, external services, such as those provided by the cloud vendor, can be consumed like any other service in the cluster, with developers using the same workflow: identify the APIs they want to use, generate the client library, and use it to handle the actual communication with the service. 
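The "same workflow" claim can be illustrated with a toy registry in which in-cluster and externally brokered services sit side by side. All names and endpoints here are illustrative assumptions:

```python
# Sketch: once a service broker registers an external managed service
# (say, a hosted Redis) in the API registry, resolving it looks exactly
# like resolving an in-cluster service.

class ApiRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint, external=False):
        self._services[name] = {"endpoint": endpoint, "external": external}

    def resolve(self, name):
        # Consumers use one lookup path regardless of where the
        # service actually runs.
        return self._services[name]["endpoint"]


registry = ApiRegistry()

# In-cluster microservice, published by its development team:
registry.register("inventory-service", "http://10.0.0.12:8080")

# Managed cloud cache, registered on its behalf by the service broker:
registry.register("redis-cache", "redis://cache.example.cloud:6379",
                  external=True)

# The consuming workflow is identical for both:
print(registry.resolve("inventory-service"))  # http://10.0.0.12:8080
print(registry.resolve("redis-cache"))        # redis://cache.example.cloud:6379
```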

The Service Broker is also our way of helping developers modernize their existing infrastructure, for instance by enabling them to package existing code in containers that can be deployed in the cluster. We are also considering scenarios in which existing applications cannot be modernized; in this case, the Service Broker can be used to 'expose' a proxy service that publishes a set of APIs in the API Registry, making consumption of the external/legacy system similar to using any other microservice in the cluster.
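The proxy idea for un-modernizable systems can be sketched as a thin translation layer. The legacy interface and the proxy's API below are both invented for illustration:

```python
# Hypothetical sketch of the proxy pattern: a thin service exposes a
# clean API (which would be published in the API Registry) in front of
# a legacy system that cannot itself be modernized.

class LegacyOrderSystem:
    """Stand-in for an existing application with a dated interface."""

    def FETCH_REC(self, rec_type, key):
        return {"TYPE": rec_type, "KEY": key, "STATUS": "SHIPPED"}


class OrderProxyService:
    """Proxy service; only its API is visible to other microservices."""

    def __init__(self, legacy):
        self._legacy = legacy

    def get_order_status(self, order_id):
        # Translate the modern API call into the legacy protocol and
        # normalize the response for cluster-internal consumers.
        record = self._legacy.FETCH_REC("ORDER", order_id)
        return {"orderId": record["KEY"],
                "status": record["STATUS"].lower()}


proxy = OrderProxyService(LegacyOrderSystem())
print(proxy.get_order_status("A-1001"))
# {'orderId': 'A-1001', 'status': 'shipped'}
```

Consumers see only the proxy's API; the legacy system's quirks stay behind it.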

Kubernetes and Istio: 

We chose Kubernetes as the basis for our platform because it is emerging as the most popular container management platform for running microservices. Another important factor is that the community around Kubernetes is growing rapidly, and every major cloud vendor offers Kubernetes support.

As mentioned before, one of our main goals is to reduce complexity for developers, and managing communication among multiple microservices can be a challenging task. For this reason, we determined that we needed to add Istio as a service mesh to our platform. With Istio we get monitoring, diagnostics, complex routing, resiliency, and policies for free. This removes a big burden from developers, who would otherwise need to implement those features themselves; with Istio, they are available at the platform level.
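To make the "for free" point concrete, here is the kind of resiliency boilerplate each service would otherwise have to carry. With a service mesh like Istio, equivalent retry policies are declared once at the platform level rather than re-implemented in every client; the helper below is purely illustrative:

```python
# Hand-rolled retry-with-backoff of the kind a service mesh makes
# unnecessary: without a mesh, every client needs code like this.

import time


def call_with_retries(request, attempts=3, backoff_s=0.0):
    """Retry a callable on connection errors, with exponential backoff."""
    last_error = None
    for attempt in range(attempts):
        try:
            return request()
        except ConnectionError as err:
            last_error = err
            time.sleep(backoff_s * (2 ** attempt))
    raise last_error


# Simulated flaky upstream that fails twice, then succeeds:
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return "ok"


result = call_with_retries(flaky)
print(result)  # ok
```

With Istio, a comparable policy lives in routing configuration applied by the platform, so service code stays free of this logic.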


Monitoring: 

Monitoring is an important component of a microservices platform. With potentially a lot of moving parts, the system requires a way to monitor its behavior at runtime. For our microservices platform we chose to offer an out-of-the-box monitoring solution which is, like the other components in our platform, based on consolidated and battle-tested technologies such as Prometheus, Zipkin/Jaeger, Grafana, and Vizceral.

In the spirit of pushing the API-first approach into monitoring as well, our monitoring solution offers developers the ability to see how microservices are connected to each other (via Vizceral) and to see data flowing across them; in the future, it will also show insight into which APIs have been used. Developers can then use distributed tracing information in Zipkin/Jaeger to investigate potential latency issues or improve the efficiency of their services. Further out, we plan to add integration with other services. For instance, we will add the ability to correlate requests between microservices with data structures inside the JVM, so developers can optimize across multiple microservices by following how data is processed for each request.
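Distributed tracing in Zipkin/Jaeger only works if every hop forwards the trace context. The sketch below propagates Zipkin-style B3 headers; the header names are the standard B3 ones, while the helper functions are illustrative:

```python
# Sketch of B3 trace-context propagation: each outgoing call reuses the
# incoming trace id and records the caller's span as its parent, so
# Zipkin/Jaeger can stitch the spans into one trace.

import uuid


def new_trace():
    """Headers for a root span, started at the edge of the system."""
    return {"x-b3-traceid": uuid.uuid4().hex,       # 128-bit trace id
            "x-b3-spanid": uuid.uuid4().hex[:16],   # 64-bit span id
            "x-b3-sampled": "1"}


def child_headers(incoming):
    """Build headers for an outgoing call from the incoming request."""
    return {"x-b3-traceid": incoming["x-b3-traceid"],
            "x-b3-parentspanid": incoming["x-b3-spanid"],
            "x-b3-spanid": uuid.uuid4().hex[:16],
            "x-b3-sampled": incoming.get("x-b3-sampled", "1")}


root = new_trace()
child = child_headers(root)
# The shared trace id is what ties both spans together in the tracer:
print(child["x-b3-traceid"] == root["x-b3-traceid"])  # True
```

In practice a tracing library or the service mesh sidecar handles this propagation, but the contract it maintains is exactly the one shown here.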

What’s Next? 

This is an initial overview of our new platform, with some insight into our motivation and the design guidelines we used. We will follow up with more posts that go deeper into the various aspects of the platform as we get closer to our initial OSS release in early 2018. Meanwhile, please take a look at our JavaOne session.

For more background on this topic, please see our other blog posts in the Getting Started with Microservices series. Part 1 discusses some of the main advantages of microservices, and touches on some areas to consider when working with them. Part 2 considers how containers fit into the microservices story. Part 3 looks at some basic patterns and best practices for implementing microservices. Part 4 examines the critical aspects of using DevOps principles and practices with containerized microservices. 
