Oracle Cloud Infrastructure (OCI) Container Instances is a serverless compute service that enables you to quickly and easily run containers without managing any servers. Container Instances runs your containers on serverless compute that is optimized for container workloads and provides the same isolation as virtual machines.

Container Instances are a great fit if you simply want to run a container without the orchestration features a Kubernetes cluster provides. A thorough introduction to OCI Container Instances can be found in the documentation and in the YouTube video “First Principles: Inside OCI Container Instances”.

The following example demonstrates how to create a Container Instance with Terraform. To keep things simple, the code is self-contained in a single Terraform file; real-world workloads should follow best practices by using the standard directory and file structure. The example was tested with Terraform 1.3.7 and version 4.107.0 of the OCI provider on macOS.

Overview

The Terraform code in this example creates a tiny Virtual Cloud Network (VCN) with a single public subnet, allowing access to an httpd:2.4 container serving nothing but the default page. This should be enough to cover the basic concepts whilst keeping the length of the article reasonably short. Additional blog posts will explain how to pull images from private container registries, limit resource usage, and use volumes.

Network Diagram

Note that you might incur costs when creating these resources.

Configuring the OCI Provider for Terraform

The first step is to configure the connection to the OCI API. This is done by means of the Terraform Provider for OCI. Before you can work with Terraform you need to make sure you have the required API keys and Oracle Cloud Identifiers (OCIDs) available. Please have a look at the OCI Provider for Terraform documentation for details. The example uses API-key authentication. To keep the code reusable, the provider configuration is supplied via environment variables. The first six of the following variables are self-explanatory; the seventh isn’t.

The Terraform code accepts a CIDR range as the “home IP CIDR”, such as x.x.x.x/32 for your home IP address. This address range will be allowed to access port 80 on the httpd container. Please don’t use 0.0.0.0/0 here, as that would open the container to the entire world.

# ------------------------------------------------------------------------------------------------ variables

variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "key_fingerprint" {}
variable "private_key_path" {}
variable "oci_region" {}
variable "compartment_ocid" {}
variable "home_address_cidr" {}

# ------------------------------------------------------------------------------------------------ provider

provider "oci" {

  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  private_key_path = var.private_key_path
  fingerprint      = var.key_fingerprint
  region           = var.oci_region
}

terraform {

  required_providers {
    oci = {
      source  = "oracle/oci"
      version = ">= 4.107.0"
    }
  }
}

In addition to passing the details needed by the oci provider (API-key authentication in this example), the required_providers block sets the provider source to oracle/oci instead of the previously used hashicorp/oci location. See Registry and Namespace Change in the documentation for more details.
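
Running terraform init in the directory containing the file downloads the provider from the oracle namespace:

# installs the oracle/oci provider into .terraform/providers
terraform init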

Configuring a Virtual Cloud Network

No cloud resources can be created without a Virtual Cloud Network (VCN). In this example a minimally viable VCN is created, allowing a Container Instance residing in a public subnet to communicate with the outside world – within limits. The Security List associated with the public subnet only allows outgoing HTTPS access for pulling images from container registries. Inbound traffic is allowed on port 80 for sources matching var.home_address_cidr.

# ------------------------------------------------------------------------------------------------ network

#
# the VCN resource
#

resource "oci_core_vcn" "demo_vcn" {

  compartment_id = var.compartment_ocid
  cidr_block     = "10.0.0.0/16"
  display_name   = "demo-vcn"
  dns_label      = "demo"
  freeform_tags = {
    "project-name" = "blogpost"
  }
}

#
# The default security list grants access to the container instance
# and allows pulling images from container registries via HTTPS
#

resource "oci_core_security_list" "public_sn_sl" {

  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.demo_vcn.id
  display_name   = "demo-vcn - security list for the public subnet"
  ingress_security_rules {

    protocol    = 6
    source_type = "CIDR_BLOCK"
    source      = var.home_address_cidr
    description = "access to container instance port 80 from home"
    tcp_options {

      min = 80
      max = 80
    }
  }

  egress_security_rules {

    protocol         = 6
    destination_type = "CIDR_BLOCK"
    destination      = "0.0.0.0/0"
    description      = "access to container registries via HTTPS"
    tcp_options {
      min = 443
      max = 443
    }
  }

  freeform_tags = {
    "project-name" = "blogpost"
  }

}

#
# A subnet for the Container Instance referencing the previously created security list
#

resource "oci_core_subnet" "demo_subnet" {

  cidr_block     = "10.0.0.0/24"
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.demo_vcn.id
  display_name   = "demo vcn - container instance (public) subnet"
  dns_label      = "containers"
  security_list_ids = [
    oci_core_security_list.public_sn_sl.id
  ]
  route_table_id = oci_core_route_table.demo_igw_rt.id
  freeform_tags = {
    "project-name" = "blogpost"
  }
}

#
# Internet Gateway & Route Table
#

resource "oci_core_internet_gateway" "demo_igw" {

  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.demo_vcn.id
  display_name   = "demo-vcn - Internet gateway"
  enabled        = true
}

resource "oci_core_route_table" "demo_igw_rt" {

  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.demo_vcn.id
  display_name   = "demo vcn - Internet gateway route table"
  route_rules {

    network_entity_id = oci_core_internet_gateway.demo_igw.id
    destination       = "0.0.0.0/0"
  }

  freeform_tags = {
    "project-name" = "blogpost"
  }
}

Creating the Container Instance

With the VCN in place it is time to create the Container Instance. The first step is to look up the names of the local Availability Domains (ADs). With those at hand you can create the OCI Container Instance. Again, this is a minimal viable example; further blog posts will explain how to pull images from private registries, limit resource usage, use volumes, and cover more advanced topics.

# ------------------------------------------------------------------------------------------------ container instance

data "oci_identity_availability_domains" "local_ads" {

  compartment_id = var.compartment_ocid
}

resource "oci_container_instances_container_instance" "demo_container_instance" {

  # create the container instance in AD1
  availability_domain      = data.oci_identity_availability_domains.local_ads.availability_domains[0].name
  compartment_id           = var.compartment_ocid
  freeform_tags            = { "project-name" = "blogpost" }
  display_name             = "demo container instance"
  container_restart_policy = "ALWAYS"
  shape                    = "CI.Standard.E4.Flex"
  shape_config {

    memory_in_gbs = 4
    ocpus         = 1
  }

  vnics {

    subnet_id             = oci_core_subnet.demo_subnet.id
    display_name          = "demo-container-instance"
    is_public_ip_assigned = true
    nsg_ids               = []
  }

  containers {

    image_url    = "httpd:2.4"
    display_name = "demo apache http server container"
  }
}

The OCI Container Instance is created in the first AD using the CI.Standard.E4.Flex shape, with just 1 Oracle CPU (OCPU) and 4 GB of memory. Inside the vnics block the Container Instance is assigned a public IP address. The container image to use is defined in the containers {} block: the public httpd image, also known as the Apache HTTP Server, is pulled with its default configuration from the public Docker registry.
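
As a convenience you can print the Container Instance’s public IP address after terraform apply. The sketch below assumes, per the provider documentation, that the resource exports the attached VNIC’s OCID as vnics[0].vnic_id, which can then be resolved with the oci_core_vnic data source:

# ------------------------------------------------------------------------------------------------ outputs

# resolve the VNIC attached to the container instance
# (assumes the vnic_id attribute exported in the vnics block)
data "oci_core_vnic" "demo_ci_vnic" {

  vnic_id = oci_container_instances_container_instance.demo_container_instance.vnics[0].vnic_id
}

output "container_instance_public_ip" {

  value = data.oci_core_vnic.demo_ci_vnic.public_ip_address
}

Once terraform apply completes, opening http://<public IP> from an address within var.home_address_cidr should return Apache’s default “It works!” page.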

Summary

OCI Container Instances are remarkably quick to start and easy to use in scenarios where a full Kubernetes installation isn’t needed. If all you need to do is run a container, a Container Instance might just be the right solution.