EMEA A&C CCOE Partner Technology Cloud Engineering

Run Apps on Compute Instances via Docker

Alper Ozisik
Oracle EMEA A&C Cloud Adoption & Implementation Consultant

Containers bring a great deal of portability to applications. In most cases, if you can create a package for a compute instance, you can create the same package for Docker.

To have an app preinstalled on a compute instance, you need either a custom image or a cloud-init script. Custom images lack the portability of a Docker image, and cloud-init installation is slower than simply pulling and running a Docker container.

This document proposes using Docker to deploy your apps on compute instances.

Docker images are the de facto standard for packaging apps together with the necessary runtime. They are much smaller than custom images, since they do not contain the OS part, and you do not need to maintain that layer if you focus only on the Docker side. To update the app, you do not need to provision a new compute instance – just pull the new version and create a new container. With this approach, updates happen within seconds rather than minutes.
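As a rough sketch, such an update could look like the following (the OCIR registry path and image name are hypothetical placeholders):

```shell
# Pull the new version of the app image (hypothetical OCIR repository path)
docker pull fra.ocir.io/mytenancy/myapp:latest

# Replace the running container with one based on the new image
docker stop myapp && docker rm myapp
docker run -d --name myapp fra.ocir.io/mytenancy/myapp:latest
```

Only the changed image layers are downloaded, which is why the pull typically completes in seconds.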

Updating the application through a custom image is a more manual process than the Docker approach. To update the application, you need to change each instance configuration to reference the new image, and repeat this for every configuration. This becomes a mess if your application is used in multiple setups, which might involve other tenants. With the Docker approach, once the relevant tag (e.g. latest) is pushed to the registry, all new deployments automatically pick it up.

A cloud-init script is essential for customizing a new compute instance. It can also be used to install the necessary runtime and the application, but this takes more installation time than the Docker way.
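With the Docker approach, the cloud-init user data only needs to install Docker itself; the application arrives later as a container image. A minimal sketch, assuming an Oracle Linux instance (the package name is distribution-specific):

```shell
#!/bin/bash
# Cloud-init user data sketch: install and enable Docker only.
# The application is not baked in here; it is delivered as a container image.
dnf install -y docker-engine   # package name varies by distribution
systemctl enable --now docker
```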

Versus the Kubernetes or Functions approach

Update management and keeping the desired state can be handled in a similar way. Kubernetes and Functions are better suited to dynamic workloads; this approach works for any kind of workload.

It is possible to use hardware-specific components that are not available with OKE and Functions. You can attach a GPU and mount high-performance volumes to the container.

When a new node is added to OKE, it takes about 15 minutes before that node is available to the cluster. Most of that time is spent provisioning, installing components (container runtime, Kubernetes agent, and more) and registering the node with the cluster. A stopped compute instance, by contrast, comes back online in roughly 40 seconds.

With OKE and Functions it makes sense to set limits on containers, such as CPU, memory, or disk limits. If you plan to run only a single app – a single instance of the app – you do not need those limits. In that case, the container can use the full capacity of the compute instance.

Agent Software

Your application should focus on doing its tasks. Another piece of software – an agent – should maintain it. That agent can run in a container and manage other containers. Updating the app can be as simple as pulling the image each time before running it; the layered structure of Docker images keeps this pull short.

When the application stops due to an error, the agent can restart it, keeping the application in its desired state.
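A minimal agent loop along these lines (the image name is a hypothetical placeholder) pulls before every run and restarts the app whenever it exits:

```shell
#!/bin/sh
# Sketch of an agent loop: refresh the image, run the app, restart on exit.
IMAGE=fra.ocir.io/mytenancy/myapp:latest

while true; do
  docker pull "$IMAGE"                    # fast if only some layers changed
  docker run --rm --name myapp "$IMAGE"   # blocks until the app stops
  sleep 5                                 # brief back-off before restarting
done
```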

When the application starts running or stops, it can inform a master controller. Streaming service events are a good approach for a multi-master event structure.

It is very easy to start the agent container whenever system boots.
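For example, Docker's own restart policies can take care of this, assuming the Docker daemon is enabled through systemd (the agent image name is hypothetical):

```shell
# Enable the Docker daemon at boot, then run the agent with a restart policy.
systemctl enable docker
docker run -d --name agent --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  fra.ocir.io/mytenancy/agent:latest
# Mounting the Docker socket lets the agent manage sibling containers.
```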

The Setup

Set up an IAM user intended as a service account. This account will be used to log in to OCIR and pull the images.
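Logging in could look like this (region key, tenancy namespace, and user name are placeholders; the password is the user's auth token, not the console password):

```shell
# Log in to OCIR with the service user's auth token, passed via stdin
echo "$OCIR_AUTH_TOKEN" | \
  docker login fra.ocir.io -u 'mytenancy/service.user' --password-stdin
```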

Install Docker on those worker compute instances. If you have a high-performance disk and want Docker – and any containers it creates – to use it, mount that path as Docker's data directory beforehand, or change it later with some manual steps.
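One way to do this is the `data-root` setting in Docker's `daemon.json`, applied before images are pulled (the mount path is a hypothetical example):

```shell
# Store all images and container data on the fast disk
mkdir -p /mnt/fastdisk/docker
cat >/etc/docker/daemon.json <<'EOF'
{ "data-root": "/mnt/fastdisk/docker" }
EOF
systemctl restart docker
```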

Make sure you expose the container port to the host correctly. Most likely you also need to configure the OS-level firewall, Network Security Groups, and Security Lists.
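For a hypothetical app listening on port 8080, the host side could be handled like this (Security Lists and Network Security Groups still need matching ingress rules on the OCI side):

```shell
# Publish the container port on the host
docker run -d -p 8080:8080 --name myapp fra.ocir.io/mytenancy/myapp:latest

# Open the port in the OS-level firewall (firewalld assumed)
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload
```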

Multi-tenant setup

Customers of your solution may install it in their own tenancies, not yours. Best practice is to create a new IAM user for each customer and assign it to a group that has only the pull permissions needed on your image repository.
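The group's policy can then be limited to pulling a single repository, along these lines (the group and repository names are hypothetical):

```
Allow group customer-a-pull to read repos in tenancy where target.repo.name = 'myapp'
```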

For instances running within OCI, OCIR automatically downloads from the closest region, regardless of the region specified in the URL.


Using containers to run even a single app on compute instances makes a difference. At first it may look like extra effort, but over the iterations of the application life cycle the merits of this approach become apparent.

You might be working with containers for the very first time, or you might already have some experience with them.
We on the EMEA CCOE Partner Technology Cloud Engineering team, focusing on Innovation and Modernization, work with you to use containers for your applications and point you to the best practices of these approaches. Drop us an email to start an engagement.
