
Hot Database Connections for Serverless Functions

Kuassi Mensah
Director, Product Management

Preamble — Problems to Solve

Firing up a new Serverless container (a.k.a. a cold start) takes from one to several seconds (the exact time varies per platform); to eliminate that cost/latency, Serverless frameworks keep already-started containers warm for a period of time (the duration varies per provider).
Serverless functions often need database access. Although less costly than starting a new Serverless container, creating and tearing down a database connection may cost tens or even hundreds of milliseconds, depending on your DBMS environment. This problem is exacerbated for Serverless functions, which are short-lived and cannot afford such a cost on every call.
The first problem to solve is avoiding connection creation and tear-down.

The other problem to solve, since connections cannot be pooled/shared across containers, is the impact of auto-scaling and high concurrency, that is, thousands of concurrent Serverless function calls in the face of a fixed number of database connections.

The discussion, solutions, and best practices in this post are based on Java but apply to all languages supported by your Serverless infrastructure (for example, Oracle Functions supports Java, Go, Node.js, Python, and Ruby).

Avoiding the Connection Creation/Tear-Down

Reusing an existing connection takes anywhere from under a millisecond to a couple of milliseconds. However, each instance of a Serverless function runs in a separate container with a single database connection; therefore, those connections cannot be pooled and shared across containers.
Fortunately, Serverless containers are kept around for several minutes after the function call ends; in addition, the Serverless programming model allows the reuse of execution context and state declared outside of handleRequest(), the entry method invoked on every call.

We can cache the single database connection (that is, a pool of size 1) in the class constructor for reuse by subsequent calls, for as long as the container sticks around.

The complete Java code of my adb-ucp function is adapted from Todd Sharp’s post on “Oracle Functions — Connecting To An ATP Database”.

Todd’s post contains all the steps (not duplicated here) for configuring, deploying, and invoking Oracle Functions and the ATP-S database, including the connect strings, Cloud credentials, and so on. I’ve replaced the query on the EMPLOYEES table with a query on ALL_TABLES (a dictionary view), thereby removing the need to create a table before running the example. I named both the application and the function “adb-ucp” and measured the time to retrieve a connection from the pool, that is, the getConnection() method call.
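Here is a minimal sketch of such a handler using Oracle UCP, not the complete adb-ucp code; it assumes the DB_URL, DB_USER, and DB_PASSWORD configuration keys from Todd's post and omits the wallet download/extraction logic:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import oracle.ucp.jdbc.PoolDataSource;
    import oracle.ucp.jdbc.PoolDataSourceFactory;

    public class AdbUcpFunction {

        // Created once per container, outside handleRequest(),
        // so it survives across warm invocations.
        private final PoolDataSource pds;

        public AdbUcpFunction() throws Exception {
            pds = PoolDataSourceFactory.getPoolDataSource();
            pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
            // Config values are exposed to the function as environment variables.
            pds.setURL(System.getenv("DB_URL"));
            pds.setUser(System.getenv("DB_USER"));
            pds.setPassword(System.getenv("DB_PASSWORD"));
            // A pool of size 1: a single cached connection per container.
            pds.setInitialPoolSize(1);
            pds.setMinPoolSize(1);
            pds.setMaxPoolSize(1);
        }

        public String handleRequest(String input) throws Exception {
            long start = System.currentTimeMillis();
            try (Connection conn = pds.getConnection()) {
                // Time spent in getConnection(): connection creation on the first
                // (cold) call, near-zero on subsequent (warm) calls.
                System.out.println("getConnection() took "
                        + (System.currentTimeMillis() - start) + " ms");
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT table_name FROM all_tables FETCH FIRST 5 ROWS ONLY")) {
                    StringBuilder sb = new StringBuilder();
                    while (rs.next()) {
                        sb.append(rs.getString(1)).append('\n');
                    }
                    return sb.toString();
                }
            }
        }
    }

The pool (and its single connection) lives in a field initialized by the constructor, so only the first call in the life of the container pays the connection-creation cost.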
The following commands let you deploy and invoke the function:
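With the Fn Project CLI, run from the function's directory, that looks like the sketch below (assuming both the application and the function are named adb-ucp, as above):

    fn deploy --app adb-ucp
    fn invoke adb-ucp adb-ucp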

As Serverless functions display only the returned data, to see the logs and printouts you need to subscribe to a logging service such as the Oracle Cloud Infrastructure Logging Service or Papertrail.

Setting up a pool of size 1 in each container avoids connection creation and tear-down; however, it results in a one-to-one mapping between function instances and database sessions, and some inactive/warm functions might even be holding onto database connections. Such a solution works well for small-scale deployments but will eventually incur waits or exceptions in large-scale deployments, where the number of concurrently active or warm function instances exceeds the finite number of available database sessions.

Sustaining Auto-Scaling/High Concurrency

As indicated in the preamble, setting up a pool of one dedicated connection in each container prevents pooling/sharing connections across containers; this one-to-one mapping between containers and RDBMS sessions cannot sustain large-scale deployments with high concurrency of Serverless functions. The assumption is that not all these functions make database access simultaneously and that they relinquish the connection when no longer needed; otherwise, sharing is defeated.
Several solutions exist, including proxy connection servers such as Oracle’s CMAN-TDM (Connection Manager in Traffic Director Mode, a feature not currently available as a Cloud service) and MySQL Router, as well as RDBMS-side connection pools such as Oracle’s Database Resident Connection Pool (DRCP).

These solutions have in common the ability to pool database sessions across middle tiers and containers.

Serverless Functions and Database Resident Connection Pool

DRCP is enabled by default with ATP-S. Clients, middle tiers, and containers are “connected” to the connection broker, which dispenses idle database sessions on demand. The connection broker steps out of the loop once it has associated the requestor with an idle database session; closed connections are returned to the pool.

The ATP-S database furnishes five service profiles out of the box: high, medium, and low for reporting or batch workloads; tp and tpurgent for OLTP. These tnsnames.ora aliases or entry points are similar to JNDI names in that they hide/virtualize the details of the underlying database services.
Let’s walk through the steps for configuring and using DRCP with the same Oracle Function used in the one-to-one mapping above.

1. Download the Cloud client credentials (see step 1, “Download ATP Wallet”, in Todd’s post) and unzip them into a local folder.

2. Edit the tnsnames.ora file to add a new alias with a DRCP reference (alternatively, you may modify one of the five original aliases). In my test, I created a new entry named ofunctions_tpurgent-DRCP by duplicating the ofunctions_tpurgent description and adding “(SERVER = POOLED)” to the CONNECT_DATA section, then saved the tnsnames.ora file; a sketch of the resulting entry follows.
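For illustration only, the new entry might look like the sketch below; the host, service name, and security section are placeholders, so copy the actual values from your own ofunctions_tpurgent entry:

    ofunctions_tpurgent-DRCP =
      (description=
        (address=(protocol=tcps)(port=1522)(host=adb.example-region.oraclecloud.com))
        (connect_data=
          (service_name=example_ofunctions_tpurgent.atp.oraclecloud.com)
          (SERVER = POOLED))
        (security=(ssl_server_cert_dn="CN=example.oraclecloud.com,O=Oracle Corporation,C=US")))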

3. Upload the wallet, including the updated tnsnames.ora, as described in step 2, “Upload wallet to a Private Bucket in Object Storage”, in Todd’s post. In fact, I added the new entry before uploading the wallet in the previous section/solution, so I did not need to re-upload it.

4. Update DB_URL in the function configuration to use the new alias, as follows.
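A sketch with the Fn CLI: the DB_URL key and the jdbc:oracle:thin URL format follow Todd's post, while the TNS_ADMIN path is a placeholder for wherever your function extracts the wallet:

    fn config function adb-ucp adb-ucp DB_URL "jdbc:oracle:thin:@ofunctions_tpurgent-DRCP?TNS_ADMIN=/tmp/wallet"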

As the function was already deployed, changing the config picks up the new URL, and with it DRCP. The next step consists in simply invoking the function as in the previous section. You are still using a pool of size 1 in each container; however, these pools now attach to the connection broker, not to dedicated database sessions. DRCP adds a millisecond or two of latency compared with using a pool of size 1 exclusively; that is the price of going through the connection broker. In exchange, you get the benefit of pooling database sessions across all containers connecting to the same ATP-S instance, thereby sustaining high concurrency. And unlike the non-DRCP case, no idle connections are held by warm containers.

Summary

I’ve walked you through solutions to the two problems stated in the preamble: (i) avoiding connection creation and tear-down on every function call; and (ii) sustaining high concurrency in the face of a finite number of database sessions.
Obviously, the first database connection request in the life of a container pays a higher price, but subsequent calls get free lunches, or almost!

 
