

October 18, 2017

Use a Layered Approach with APIs to Protect your Data

Robert Wunderlich
Product Strategy Director

We often talk about the wonderful opportunities that APIs open up for data, such as monetization and reuse, but a critical concern is security. To meet security requirements, many vendors provide products such as API gateways. Many API gateways provide a wide range of functionality, but should all security checks happen in one place?

There is an allure to a "one size fits all" approach, mainly because the configuration may seem simpler. Learning only one tool and using it end to end, from the API all the way to the service implementation, may seem attractive. When we subject our APIs to web-scale workloads, however, we may find that this approach will not scale.

I talked about this in another post that focused on the API Gateway and Integration Layers, but now let's take a step back to include more of the invocation flow. This post will also cover load balancers and content delivery networks (CDNs), along with some of the principles of enterprise deployments and why layers protect our most precious commodity: data.

Speaking of data, let's say we have some sort of data store, such as a database. We would not open up the database listener directly to the Internet. At the very minimum, we would place a firewall in between, and more likely we would have multiple tiers in between.

Data is the most valuable resource and deserves the most protection. The diagram above shows multiple layers before anything from the public Internet can reach the most secure data. Clients cannot go directly to the data; they have to pass through intermediaries and, of course, can only perform certain prescribed functions, such as invoking an API.

One may ask whether we can simplify this and collapse multiple functions into one tier. Let's look at each tier and its value.

Content Delivery Network/Public Facing Firewall and Load Balancer

  • This is the outermost public-facing endpoint.
  • Some validations can take place here before a request is allowed to proceed.
  • This level can screen the request for all sorts of exploits, including but not limited to SQL injection and cross-site scripting, and can also apply GeoIP blocking and reject HTTP protocol violations. Essentially, this level validates the HTTP request at the level of the resource, headers, and attributes such as the client IP (a minimal sketch follows this list).
  • Protect against denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks.
  • This tier can also be used for image caching, HTTP compression, SSL termination, and so on.
  • May enforce SSO and establish a session/transaction token.
  • May also hide or modify errors; for example, internal server implementation details are not shown in error messages returned to the client, but are logged so administrators can diagnose and resolve issues.
  • Logging and Analytics
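To make the edge checks concrete, here is a minimal Python sketch of the kind of screening a CDN or public-facing firewall performs before forwarding a request. The regular expressions, the blocked-country set, and the screen_request function are simplified assumptions for illustration; a real WAF or CDN uses far richer rule sets.

```python
# Minimal sketch of edge-tier request screening, assuming a hypothetical
# front-end proxy hands us the raw request attributes.
import re

BLOCKED_COUNTRIES = {"XX"}                      # hypothetical GeoIP policy
SQLI_PATTERN = re.compile(r"('|--|;|\bunion\b|\bselect\b)", re.IGNORECASE)
XSS_PATTERN = re.compile(r"<\s*script", re.IGNORECASE)

def screen_request(path: str, query: str, headers: dict, client_country: str) -> bool:
    """Return True if the request may proceed to the DMZ, False to reject fast."""
    if client_country in BLOCKED_COUNTRIES:
        return False                            # GeoIP blocking
    if len(path) > 2048 or any(len(v) > 8192 for v in headers.values()):
        return False                            # crude protocol-violation guard
    payload = path + "?" + query
    if SQLI_PATTERN.search(payload) or XSS_PATTERN.search(payload):
        return False                            # obvious SQLi / XSS signatures
    return True

# Example: an injection attempt in the query string is rejected at the edge
print(screen_request("/catalog", "id=1 UNION SELECT password",
                     {"User-Agent": "curl"}, "US"))   # False
```

Rejecting at this level keeps obviously bad traffic from ever consuming DMZ or back-end capacity.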

Demilitarized Zone (DMZ): API Gateways, B2B Gateways, FTP Proxies, etc

Requests that enter the DMZ have the following characteristics:

  1. Cleared of the highest-level exploits: Somewhat, but not fully trusted.
  2. Require further processing: The request was not for cached content so the CDN tier forwarded it to the DMZ.

The gateways in the DMZ may proceed to validate and handle the request, performing functions such as the following (a minimal sketch follows the list):

  • Key validation: Application association
  • Throttling/Rate limiting: Protect back-end systems, or enforce contracts
  • AuthN/AuthZ: Methods like OAuth2 to validate that the client is authenticated and authorized to call a particular resource
  • Transformation: Lightweight message modification
  • Redaction: Protecting data
  • Routing: Identifying back-end services to receive the requests
  • Caching: Some non-sensitive/redacted data
  • Logging and Analytics
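As a concrete illustration, here is a minimal Python sketch of two of these checks, API key validation and rate limiting. The key registry, the per-key limit, and the in-memory counters are assumptions for illustration; a production gateway would use its own key store and a distributed rate limiter.

```python
# Minimal sketch of two gateway checks: key validation and throttling.
import time
from collections import defaultdict

API_KEYS = {"abc123": "mobile-app"}              # key -> registered application
RATE_LIMIT = 100                                 # requests per minute per key
_window = defaultdict(lambda: [0, 0.0])          # key -> [count, window_start]

def gateway_check(api_key: str):
    """Return (allowed, disposition) for a single inbound request."""
    app = API_KEYS.get(api_key)
    if app is None:
        return False, "401 invalid API key"          # key validation
    count, start = _window[api_key]
    now = time.time()
    if now - start >= 60:                            # start a new one-minute window
        _window[api_key] = [1, now]
        return True, f"forward on behalf of {app}"
    if count >= RATE_LIMIT:
        return False, "429 rate limit exceeded"      # throttling / contract enforcement
    _window[api_key][0] += 1
    return True, f"forward on behalf of {app}"

print(gateway_check("abc123"))   # (True, 'forward on behalf of mobile-app')
print(gateway_check("wrong"))    # (False, '401 invalid API key')
```

Note that both checks succeed or fail without touching any back-end system, which is exactly the point of doing them in the DMZ.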

Green-zone/Application Tier

Requests that have made it here have been validated and cleared of exploits. They can now be received by applications, service implementations, or the Integration Platform, which may perform the following (a sketch follows the list):

  • Connect to legacy applications: Not all applications or technologies are service-enabled; sometimes an adapter is required
  • Fine-grained authorization: Deeper level entitlements with access manager systems or the security layers of the applications themselves.
  • Heavy-weight transformations: Performing complex mappings, for example turning a verbose SOAP/XML message into a concise REST/JSON message
  • Orchestrations: Connecting to multiple back-end systems to provide a single service
  • Caching: Maintaining shaped results for continued calls under the principles of eventual consistency
  • Logging and Analytics
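Below is a minimal Python sketch of an integration-tier orchestration that combines two hypothetical back-end calls and performs a heavy-weight transformation into a concise JSON shape. The function names and field names are invented for illustration.

```python
# Minimal sketch of an integration-tier orchestration: two hypothetical
# back-end lookups are combined and reshaped into one concise JSON reply.
import json

def fetch_customer(customer_id: str) -> dict:
    # placeholder for a legacy-system call (for example, via an adapter)
    return {"CUST_ID": customer_id, "CUST_NAME": "Ada Lovelace", "REGION_CD": "EMEA"}

def fetch_orders(customer_id: str) -> list:
    # placeholder for a second back-end service
    return [{"ORDER_NO": "1001", "TOTAL_AMT": "42.50"}]

def customer_summary(customer_id: str) -> str:
    customer = fetch_customer(customer_id)
    orders = fetch_orders(customer_id)
    # heavy-weight transformation: verbose back-end fields -> concise REST/JSON shape
    summary = {
        "id": customer["CUST_ID"],
        "name": customer["CUST_NAME"],
        "orders": [{"number": o["ORDER_NO"], "total": float(o["TOTAL_AMT"])}
                   for o in orders],
    }
    return json.dumps(summary)

print(customer_summary("C42"))
```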

Data Tier

Applications, service implementations, and the integration platform make connections to data stores, which enforce their own controls (a minimal sketch follows the list):

  • Only authorized servers may connect, enforced through IP filtering, network segmentation, or some form of shared-key security in addition to credentials
  • Data auditing
  • Role-based access control
  • Logging and Analytics
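The sketch below illustrates data-tier discipline in Python, using the standard-library sqlite3 module as a stand-in for a production database: the connection target comes from configuration rather than code, and every query is parameterized. The APP_DB_PATH variable and the schema are assumptions for illustration.

```python
# Minimal sketch of data-tier discipline with sqlite3 as a stand-in database.
import os
import sqlite3

DB_PATH = os.environ.get("APP_DB_PATH", ":memory:")   # hypothetical setting

def lookup_order(order_no: str) -> list:
    conn = sqlite3.connect(DB_PATH)
    try:
        # seed a row purely so the example runs end to end
        conn.execute("CREATE TABLE IF NOT EXISTS orders (order_no TEXT, total REAL)")
        conn.execute("INSERT INTO orders VALUES (?, ?)", ("1001", 42.5))
        # bind variables, never string concatenation, even for "trusted" callers
        rows = conn.execute("SELECT order_no, total FROM orders WHERE order_no = ?",
                            (order_no,)).fetchall()
        return rows
    finally:
        conn.close()

print(lookup_order("1001"))              # [('1001', 42.5)]
print(lookup_order("1001' OR '1'='1"))   # [] -- the injection attempt finds nothing
```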

The Layered Approach to Security

Let's return to the question of whether we can collapse some of the layers and use one platform to perform all, or most, of the functions detailed above. Again, the allure is a simpler configuration, and technically the answer is yes, this is possible. There are a few reasons, however, why we want to maintain a fully layered approach.

First, if we look at any security methodology in the physical world, we will find multiple layers. Office buildings, airports, and military installations, for example, employ layers both for security and for efficiency.

To use an airport as an example, if I am taking a flight, I have a boarding pass. Let's say I have some sort of priority boarding. When entering the priority security line, an agent may want to see my pass just to check whether I have the appropriate mark indicating priority boarding. That agent is not validating that I can get on the plane; rather, they are performing a quick, cursory check to redirect me immediately if I got in the wrong line. This is an example of failing fast, and it also reduces the load on downstream systems by rejecting my request based on the simplest of parameters.

Of course, we might ask whether we can use the same product across multiple tiers, and again the answer is yes. A word of caution, though: the more functionality packed into any one component, the larger its attack surface and the more likely it is to run into common performance issues. The more functions a particular technology performs, the heavier it tends to be, and there are often trade-offs as features begin to conflict.

To revisit the airport analogy, the person who first looks at my boarding pass does not clear me to board my specific flight. Imagine one person handling the security checks and boarding for all gates and all airlines. We could assign multiple people, but their task would quickly become more complex as they handled gate changes, boarding groups, and so on.

By design, I will have multiple interactions with staff specifically trained for their function as I make my way from the departures zone to the aircraft.

Scalability

In the airport analogy, we were talking about layered security and a certain quality of service (QoS) achieved by rejecting requests (entering a security line) earlier rather than later. In our layered approach from a technology perspective, we can not only reject invalid requests but also, in some cases, return the requested results without having to go all the way back to the data tier.

At each step of the way, we can employ caching and horizontally scale platforms to handle requests beyond the capacity of the back-end systems. Furthermore, we can speed up calculated results. Let's say we have an orchestration that calls multiple back-end services to provide a result. We can cache that result and offload subsequent requests to the integration tier, for example.
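Here is a minimal Python sketch of that idea: a cache-aside lookup with a time-to-live, so repeat requests are served from the integration tier without re-invoking the back-end orchestration. The TTL value and the in-memory dictionary are simplifying assumptions; a real deployment would use a shared cache.

```python
# Minimal sketch of cache-aside with a TTL at the integration tier.
import time

_cache = {}                       # key -> (expires_at, value)
TTL_SECONDS = 30                  # acceptable staleness under eventual consistency

def expensive_orchestration(key: str) -> str:
    time.sleep(0.1)               # stands in for several back-end calls
    return f"result-for-{key}"

def get_with_cache(key: str) -> str:
    entry = _cache.get(key)
    now = time.time()
    if entry and entry[0] > now:
        return entry[1]                          # cache hit, back ends untouched
    value = expensive_orchestration(key)         # cache miss, do the real work
    _cache[key] = (now + TTL_SECONDS, value)
    return value

print(get_with_cache("C42"))  # slow path, populates the cache
print(get_with_cache("C42"))  # fast path until the TTL expires
```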

A caveat about caching: cached data is data at rest, so we need to be vigilant about what we cache where. Non-sensitive content, such as images of products in a catalog, could be cached at the outermost layer. Business-sensitive data, on the other hand, should not be cached in the DMZ; the integration platform can manage that cache within the green zone.
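One way to make that caveat operational is a simple placement rule that maps each tier to the data sensitivities it may cache. The tier names and sensitivity labels below are assumptions for illustration.

```python
# Minimal sketch of a cache-placement rule: only data classified as public
# may be cached at the CDN or DMZ; anything sensitive stays in the green zone.
ALLOWED = {
    "cdn":        {"public"},
    "dmz":        {"public", "redacted"},
    "green-zone": {"public", "redacted", "business-sensitive"},
}

def may_cache(tier: str, sensitivity: str) -> bool:
    return sensitivity in ALLOWED.get(tier, set())

print(may_cache("cdn", "public"))               # True  -- product images, say
print(may_cache("dmz", "business-sensitive"))   # False -- keep it behind the DMZ
```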

Opportunities

While capable of extreme performance serving the most critical business requirements, enterprise deployments can be complex. Having multiple disparate services requires more planning, configuration, and management on the part of administrators. Furthermore, having different tiers can result in silos of information, making it more difficult for administrators to monitor and manage the infrastructure. All of this can lead to a greater risk of overlooked vulnerabilities, which goes against the rationale for having a proper enterprise deployment.

Fortunately, there has been a move to decouple the user experience from the processing engines and to lift the burden of management from users. This is a move in the right direction, and there are some opportunities going forward.

  • Artificial Intelligence: Analytics needs to go beyond just showing logs, charts, and alerts. By applying AI, patterns can be identified to detect threat vectors that are not yet known.
  • Solution-based design and implementation: By providing common solution-based canvases to designers and implementers, the underlying complexity can be abstracted. Users should not have to think about the underlying technology as much. A common question from users is "for use case 1, should I use product X or product Y?" The user should just be thinking of how to solve the use case, and the vendor should handle the selection of the underlying product(s).
  • Single-pane of glass: Provide users visibility across the multiple technologies. This increases understanding and reduces complexity.
  • Go Autonomous: Oracle announced the first Autonomous Database which is a huge leap forward in reducing costs and complexity. The more we make platforms autonomous, the better and safer they will be.

Conclusion

Using a layered topology is critical to ensuring security and scalability. While these topologies can be more complex, hybrid deployment offerings like Oracle Integration Cloud Service and Oracle API Platform Cloud Service reduce the complexity of Integration and API Management. With all of the new technologies ranging from microservices, to containers to serverless computing, the hybrid deployment approach will become more critical than ever. I believe this is just the start of great things to come.

Disclaimer

The content above does not necessarily represent the views of Oracle Corporation. All statements are my own.
