Recently, a customer asked me about extending Oracle Integration Cloud (OIC): specifically, what to do when no adapter exists for a target system, and how to build custom services that OIC can consume. These questions led us to a proof of concept based on the idea of creating a set of cloud native services within a tenancy that can serve multiple purposes.

This blog covers what we wanted to accomplish and then goes through the main steps for doing it. The result of this engagement was an extensible reference architecture that you can use for Oracle Integration projects and other workloads that you might run on Oracle Cloud Infrastructure (OCI).

Basic requirements

The customer’s tenancy had the following basics already in place:

  • Several Oracle Integration instances (one per lifecycle stage)

  • Federated identity through Okta

  • FastConnect, with an IPSec VPN as a backup

  • Dynamic routing gateway (DRG) for on-premises connectivity

  • A hub and spoke network architecture

New use cases appeared around migrating existing SOA composites from the on-premises SOA installation to OIC. For example, one flow needed to insert data into a PostgreSQL database as part of an OIC integration.

When I suggested that we REST-enable this database, the customer indicated a preference to avoid standing up infrastructure, such as Kubernetes, on which this configuration would need to run. Maintaining cloud-based infrastructure was a no-fly zone, so we needed to reach deeper into our bag of tricks.

Because an integration process often doesn’t have a human consumer, some latency is acceptable. We also needed to potentially expose some of the services that we might build, including services exposed by OIC itself, to the public internet. This exposure introduces risk to mitigate.

From these requirements, I took the following core principles as success criteria:

  • Securely expose select services to trading partners.

  • Minimize the use of non-managed cloud infrastructure.

  • Use cloud native computing principles, such as loose coupling, serverless, open standards, and so on.

Forming an architecture

I started down the path of investigating a combination of Oracle Functions, Oracle Streaming, and Oracle API Gateway, three products that OCI offers as fully managed services. Together, you can configure and expose these services to provide both security and obscurity. You can also configure API Gateway to work with any OAuth 2.0 provider, which fit neatly with the customer’s existing Okta federation.
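To make this concrete from the consumer’s side, here’s a minimal sketch of the two-step call pattern: obtain a token from the OAuth 2.0 provider with the client-credentials grant, then call a service exposed through API Gateway with that bearer token. The Okta URL, gateway hostname, client credentials, and scope below are all placeholders, not values from the actual engagement.

```python
import requests

# Hypothetical endpoints: substitute your Okta authorization server
# and the hostname of your API Gateway deployment.
TOKEN_URL = "https://example.okta.com/oauth2/default/v1/token"
API_URL = "https://gw.example.oci.customer-oci.com/v1/orders"

# Step 1: OAuth 2.0 client-credentials grant against the identity provider.
token_resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "integration"},
    auth=("my-client-id", "my-client-secret"),  # placeholder credentials
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: Call the service behind API Gateway; the gateway validates
# the JWT before routing the request to the backend.
api_resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {access_token}"},
    json={"orderId": 1234},
)
print(api_resp.status_code, api_resp.json())
```

OIC’s REST adapter can follow this same pattern, so anything we publish through the gateway is immediately consumable from an integration flow.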

But what about networking? The customer had already set up the hub VCN of a hub and spoke topology but had no spokes yet. In this model, all traffic from on-premises traverses the central hub VCN and is routed to new spoke VCNs through local peering gateways (LPGs).

During the proof of concept, the networking situation changed with the addition of what Oracle calls enhanced cloud networking. For our purposes, this addition opened the possibility of a multispoke network architecture, de-emphasizing a hub VCN that all traffic would flow through. The new hub is the dynamic routing gateway (DRG) itself, which now supports multiple VCN, FastConnect, and IPSec attachments and lets you configure all the VCN-to-VCN and VCN-to-on-premises routing in one place.
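In practice we built the network with Terraform (more on that later), but as a rough sketch of the shape of the API, attaching spoke VCNs to the enhanced DRG with the OCI Python SDK might look like the following; all OCIDs are placeholders.

```python
import oci

# Load the default config (~/.oci/config); adjust for your tenancy.
config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

DRG_ID = "ocid1.drg.oc1..example"  # placeholder OCIDs throughout
SPOKE_VCNS = [
    "ocid1.vcn.oc1..cloudnative",
    "ocid1.vcn.oc1..publicaccess",
    "ocid1.vcn.oc1..services",
]

# Attach each spoke VCN to the DRG. With the enhanced DRG, every
# attachment participates in DRG route tables, so VCN-to-VCN and
# VCN-to-on-premises routing is configured centrally.
for vcn_id in SPOKE_VCNS:
    details = oci.core.models.CreateDrgAttachmentDetails(
        drg_id=DRG_ID,
        vcn_id=vcn_id,
        display_name=f"spoke-{vcn_id[-12:]}",
    )
    attachment = vnet.create_drg_attachment(details)
    print("Attached:", attachment.data.id)
```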

We built out a DRG and attached several spoke VCNs: one housing the cloud native services, such as API Gateway, Oracle Functions, and Oracle Streaming; another containing only the public access components, such as a load balancer; and a simple services VCN. The following graphic depicts the high-level architecture.

A graphic depicting the architecture connecting the OCI region to the customer data center.

We placed public access and Oracle services access in separate VCNs. If public access is allowed, you get an extra layer of protection behind an Oracle web application firewall, whose rules you can customize per application or workload. The services VCN exists solely to provide access to the Oracle Services Network (OSN) for on-premises workloads that need to reach Object Storage and other services.

Going deeper

Because we’re talking about cloud native, the VCN that encompasses these technologies deserves a little more detail. In the following graphic, all the subnets are private, meaning that outside access comes through the DRG, either from the internet (through the public access VCN) or from on-premises (through VPN or FastConnect). So, no internet gateway exists here. Within this VCN, the development team builds out workloads based on the technologies of choice.

In our example, we stood up an autonomous database in a private subnet, reachable through a network security group (NSG) only from the workloads that require it. Also depicted is a private subnet for Oracle streams, which houses the endpoint for the Kafka-compatible Oracle Streaming service. As with the Autonomous Database service, the actual service elements are fully managed by Oracle, so the endpoint is all we put into our VCN.

To make services accessible from Oracle Integration (or other consumers), the graphic shows both an API gateway and an OIC connectivity agent. During the proof of concept, we used API Gateway to provide OAuth 2.0 authentication and authorization for the REST-based services we exposed. The built-in OIC adapters for database and streaming can’t reach private subnets directly, so the agent virtual machine provides this connectivity.

A graphic depicting the cloud native private VCN.
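To illustrate the “no internet gateway” point above: a spoke subnet’s route table needs only rules that point at the DRG. Here’s a minimal sketch with the OCI Python SDK, using placeholder OCIDs and an assumed on-premises CIDR.

```python
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

ROUTE_TABLE_ID = "ocid1.routetable.oc1..example"  # placeholder
DRG_ID = "ocid1.drg.oc1..example"                 # placeholder

# Route on-premises-bound traffic to the DRG. No internet gateway
# rule exists, so the subnet has no direct path to the internet.
rule = oci.core.models.RouteRule(
    destination="10.0.0.0/8",        # assumed on-premises CIDR
    destination_type="CIDR_BLOCK",
    network_entity_id=DRG_ID,
)
vnet.update_route_table(
    ROUTE_TABLE_ID,
    oci.core.models.UpdateRouteTableDetails(route_rules=[rule]),
)
```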

What about Oracle Integration?

This blog was about OIC, right? Yes and no. Our goal was not to change the way that Oracle Integration works, but to enhance it by offering more capabilities that aren’t available as native adapters. Giving the development staff the ability to quickly author, deploy, and access serverless functions is a huge boost. I refer to these functions as a Swiss Army knife because they can get you what you need, safely and securely exposed as a REST API that OIC can easily integrate with.

One tested example was Postgres. The customer needed to insert a row into a Postgres table as part of an integration flow. As mentioned, standing up and maintaining a REST API wasn’t an option. So, we wrote a serverless function that retrieved the Postgres credentials from OCI Vault and used a Python driver to read from and write to the database. The OIC flow didn’t mind the few extra seconds of cold start for the serverless function, and the call went through API Gateway for added logging and visibility.
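We can’t reproduce the customer’s function here, but a minimal sketch of the pattern might look like the following. It assumes the Oracle Functions Python runtime (the fdk module), the psycopg2 driver bundled into the function image, a secret OCID and connection details passed in through the function’s configuration, and a hypothetical orders table.

```python
import base64
import io
import json
import os

import oci
import psycopg2
from fdk import response


def get_pg_password():
    """Fetch the Postgres password from an OCI Vault secret."""
    # Functions authenticate to OCI services with a resource principal.
    signer = oci.auth.signers.get_resource_principals_signer()
    secrets = oci.secrets.SecretsClient(config={}, signer=signer)
    bundle = secrets.get_secret_bundle(os.environ["PG_SECRET_OCID"])
    return base64.b64decode(
        bundle.data.secret_bundle_content.content
    ).decode("utf-8")


def handler(ctx, data: io.BytesIO = None):
    body = json.loads(data.getvalue())
    conn = psycopg2.connect(
        host=os.environ["PG_HOST"],  # Postgres endpoint in a private subnet
        dbname=os.environ["PG_DB"],
        user=os.environ["PG_USER"],
        password=get_pg_password(),
    )
    with conn, conn.cursor() as cur:
        # Hypothetical table; the real flow inserted integration payloads.
        cur.execute(
            "INSERT INTO orders (order_id, payload) VALUES (%s, %s)",
            (body["orderId"], json.dumps(body)),
        )
    conn.close()
    return response.Response(
        ctx,
        response_data=json.dumps({"status": "inserted"}),
        headers={"Content-Type": "application/json"},
    )
```

Fronted by API Gateway, this function becomes just another REST endpoint that the OIC REST adapter can invoke, with the credentials never leaving OCI Vault.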

Because I mentioned it earlier, a quick note on the Streaming service: OIC offers a direct adapter for Oracle streams. As part of our cloud native VCN, we can use the Kafka compatibility layer within Oracle streams to integrate with (or replace) the customer’s existing Kafka infrastructure. OIC doesn’t need to know or care about the underlying technology, so we can use any type of cloud native technology to achieve our goal. But having everything built into the OCI tenancy is a huge win.
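As a sketch of what that compatibility layer looks like from a producer’s point of view, here’s a minimal example using the kafka-python library. The bootstrap endpoint, tenancy, user, stream pool OCID, and auth token follow the format Oracle documents for the Streaming Kafka API, but the specific values are placeholders.

```python
from kafka import KafkaProducer

# Oracle Streaming's Kafka endpoint uses SASL_SSL with the PLAIN
# mechanism. Username format (per Oracle's docs):
#   <tenancyName>/<userName>/<streamPoolOcid>
# The password is an OCI auth token, not the user's console password.
producer = KafkaProducer(
    bootstrap_servers="cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="mytenancy/integration.user/ocid1.streampool.oc1..example",
    sasl_plain_password="<auth-token>",  # placeholder
)

# Publish to a stream addressed like a Kafka topic; OIC's streaming
# adapter (or any Kafka client) can consume from the same stream.
producer.send("integration-events", b'{"orderId": 1234}')
producer.flush()
```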

Other thoughts

This picture is a smaller version of a larger architecture we built for this customer. The full architecture includes multiregion support, FastConnect as a DRG attachment, custom routing rules to control what traffic can pass from VCN to VCN and to on-premises, and other spokes for E-Business Suite, SOA, Kubernetes clusters, and other distinct workloads. If the customer chose to adopt OCI solely as an extension of their on-premises environment, they could remove the public access VCN. In that case, all traffic to and from OCI traverses the dynamic routing gateway.

As part of this proof of concept, we deployed the CIS Landing Zone, which is a great starting point for any OCI tenancy. We also built components out with Terraform and used Oracle Resource Manager and Oracle Logging Analytics. For more details, see this blog on API Gateway log analysis.

Conclusion

This experience showed the customer (and me) how capable Oracle Cloud Infrastructure is at running multiple workloads. We proved that it can meet all their core requirements and then some.