Article by Abhay Bhavsar (September 2023)

Overview and benefits of deploying Oracle Digital Assistant custom components to Oracle Kubernetes Engine

The most common way to deploy custom components in Oracle Digital Assistant is to the embedded container within Oracle Digital Assistant skills. This approach is popular largely because of its ease of use and, in the case of Oracle SaaS skills, because customers can pull the components along with the skills from the skill store and customize the out-of-the-box implementation.

A typical Oracle Digital Assistant project, however, usually requires various integrations with back-end and functional services. The more you build, the more custom code you need to deploy and manage. In addition, you may want to use custom component services within different skills, or in the future keep entity event handler code in a central location. All of these are good arguments for looking into what a remote custom component deployment can do for you.

The following are key benefits of deploying Oracle Digital Assistant (ODA) custom components to Oracle Kubernetes Engine (OKE):

  1. The custom components are deployed centrally, so multiple skills can leverage a common set of components. Developers can build generic libraries and share them across skills within and across projects.
  2. The ODA embedded container has a known cold-start delay of a few seconds on the very first access to a custom component. Custom components deployed to OKE are always available and do not suffer from this cold-start issue.
  3. Access to the custom components is secured with credentials (user ID and password) stored in an OCI Vault. The OCI Vault can also store credentials used to authenticate against remote services. This is more secure than keeping passwords in skill configuration parameters or in custom component code (a minimal sketch follows this list).
  4. Since OKE custom components are OCI resources, you can control access to other OCI resources and services through resource principal access.
  5. Deploying to OKE involves storing custom components as Docker images in the OCI Registry, which further enables continuous integration and version control of custom component code.
  6. Custom components deployed to OKE in private subnets can access various other resources through standard network access policies, for example, ODA calling an ORDS API on an ADW instance that is reachable only within a specific private VCN.
  7. Customers have better control over the compute power available to their OKE cluster, so they can manage the cost of running development and production workloads.
  8. OKE has a highly available, redundant architecture that is robust and scalable. Whenever a pod, cluster node, fault domain, or availability domain fails, OKE immediately launches fresh pods to serve user requests.
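
To illustrate benefits 3 and 4, here is a minimal sketch, using the OCI Python SDK, of how a service running in an OKE pod could read a password from an OCI Vault secret with resource principal authentication. The secret OCID is a placeholder, and the sketch assumes the workload has already been granted resource principal access through a dynamic group and policy; it illustrates the pattern rather than reproducing the code in the downloadable resources, and the same pattern is available in OCI's other language SDKs.

    import base64
    import oci

    # Resource principal signer: assumes the OKE workload has been granted
    # resource principal access (dynamic group + policy).
    signer = oci.auth.signers.get_resource_principals_signer()

    # No API keys or config file needed; the signer carries the identity.
    secrets_client = oci.secrets.SecretsClient(config={}, signer=signer)

    # Placeholder OCID of the Vault secret holding the component password.
    SECRET_OCID = "ocid1.vaultsecret.oc1..exampleuniqueid"

    # Fetch the current secret bundle and decode its base64-encoded content.
    bundle = secrets_client.get_secret_bundle(secret_id=SECRET_OCID).data
    password = base64.b64decode(
        bundle.secret_bundle_content.content).decode("utf-8")

    # 'password' can now be used to authenticate calls to a protected back end
    # instead of hard-coding credentials in skill parameters or component code.

Retrieving the credential at runtime like this keeps it out of source control and container images, and rotating it in the Vault does not require redeploying the component.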


This article is part 2 of a 3-part series that describes how to set up an Oracle Kubernetes Engine (OKE) cluster and deploy custom components to it.


Read Full Article (pdf)

Download Resources (zip)

Other posts in the series:

Part 1 of 3 – Setting up an Oracle Kubernetes Cluster on OCI

Part 3 of 3 – Setting up a developer account for deploying custom components to OKE cluster
