
Step Up to Modern Cloud Development

Infrastructure as Code using Terraform on Oracle Developer Cloud

Abhinav Shroff
Principal Product Manager

With our April release, we started supporting HashiCorp Terraform builds in Oracle Developer Cloud. This blog will help you understand how to use HashiCorp Terraform in a build pipeline to provision Oracle Cloud Infrastructure resources as part of the build automation.

 

Tools and Platforms Used

Below are the tools and cloud platforms used in this blog:

Oracle Developer Cloud Service: The DevOps platform used to build the CI & CD pipeline.

Oracle Cloud Infrastructure: The IaaS platform on which we provision the infrastructure.

Terraform: The tool used for provisioning infrastructure on the cloud, in this case Oracle Cloud Infrastructure, or OCI as it is popularly known. I will refer to it as OCI from here on.

 

About HashiCorp Terraform

HashiCorp Terraform is a tool that helps you write, plan and create your infrastructure safely and efficiently. It can manage existing, popular service providers such as Oracle, as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. It helps you build, manage and version your infrastructure code. To learn more about HashiCorp Terraform, go to: https://www.terraform.io/

 

Terraform Scripts

To execute the Terraform scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload all the scripts to the Git repository. To do so, first install the Git CLI on your machine and then use the commands below to upload the code.

I was using a Windows machine for the script development, so below is what you need to run on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Terraform script folder>

Command_prompt:>git init

Command_prompt:>git add --all

Command_prompt:>git commit -m "<some commit message>"

Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL>

Command_prompt:>git push origin master

Below is a description of the folder structure for the Terraform scripts in the Git repository on Oracle Developer Cloud Service.

The Terraform scripts are inside the exampleTerraform folder, and oci_api_key_public.pem and oci_api_key.pem are the OCI API keys.

The exampleTerraform folder contains all the ".tf" files along with the env-vars file. Each of these files is described later in this blog.

The "userdata" folder contains the bootstrap shell script, which is executed when the VM first boots up on OCI.
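Putting this together, the repository layout looks roughly like this (a sketch based on the file names mentioned in this blog, not an exact listing):

exampleTerraform/
    env-vars
    variables.tf
    compute.tf
    network.tf
    block.tf
    provider.tf
    datasources.tf
    outputs.tf
    remote-exec.tf
    id_rsa
    id_rsa.pub
    userdata/
        bootstrap
oci_api_key.pem
oci_api_key_public.pem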

Below is a description of each file in the folder, along with a snippet:

env-vars: The most important file, where we set all the environment variables that the Terraform scripts use to access OCI and provision the instance.

### Authentication details
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa"
export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaa"
export TF_VAR_fingerprint="29:b1:8b:e4:7a:92:ae:d5"
export TF_VAR_private_key_path="/home/builder/.terraform.d/oci_api_key.pem"
 
### Region
export TF_VAR_region="us-phoenix-1"
 
### Compartment ocid
export TF_VAR_compartment_ocid="ocid1.tenancy.oc1..aaaa"
 
### Public/private keys used on the instance
export TF_VAR_ssh_public_key=$(cat exampleTerraform/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat exampleTerraform/id_rsa)

Note: All the OCIDs above are truncated for security and brevity.

The following screenshots of the OCI console (not reproduced here) show where to locate these OCIDs:

[Screenshot: tenancy_ocid and region]

[Screenshot: compartment_ocid]

[Screenshot: user_ocid]

Point these variables to the RSA key files used for the SSH connection and to the OCI API key private PEM file, both of which are in the Git repository.
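If the id_rsa key pair referenced in env-vars does not exist yet, here is a minimal sketch for generating it, assuming it is created inside the exampleTerraform folder as in the snippet above:

# Generate the RSA key pair (no passphrase) used for the SSH connection to the instance
ssh-keygen -t rsa -b 2048 -N "" -f exampleTerraform/id_rsa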

variables.tf: In this file we initialize the Terraform variables and configure the instance image OCID. This is the OCID of one of the base images available out of the box on OCI; it varies based on the region where your OCI instance is provisioned. Use this link to learn more about the OCI base images. Here we also configure the path of the bootstrap file, which resides in the userdata folder and is executed when the OCI machine boots.

variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}
variable "compartment_ocid" {}
variable "ssh_public_key" {}
variable "ssh_private_key" {}
# Choose an Availability Domain
variable "AD" {
    default = "1"
}
variable "InstanceShape" {
    default = "VM.Standard1.2"
}
variable "InstanceImageOCID" {
    type = "map"
    default = {
        // Oracle-provided image "Oracle-Linux-7.4-2017.12.18-0"
        // See https://docs.us-phoenix-1.oraclecloud.com/Content/Resources/Assets/OracleProvidedImageOCIDs.pdf
        us-phoenix-1 = "ocid1.image.oc1.phx.aaaaaaaa3av7orpsxid6zdpdbreagknmalnt4jge4ixi25cwxx324v6bxt5q"
        //us-ashburn-1 = "ocid1.image.oc1.iad.aaaaaaaaxrqeombwty6jyqgk3fraczdd63bv66xgfsqka4ktr7c57awr3p5a"
        //eu-frankfurt-1 = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaayxmzu6n5hsntq4wlffpb4h6qh6z3uskpbm5v3v4egqlqvwicfbyq"
    }
}
variable "DBSize" {
    default = "50" // size in GBs
}
variable "BootStrapFile" {
    default = "./userdata/bootstrap"
}
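The contents of the bootstrap script are not shown in this blog; a minimal sketch of what userdata/bootstrap might contain is below (the commands here are illustrative assumptions, not the original script):

#!/bin/bash
# Executed via cloud-init on the first boot of the instance.
# Example only: update packages and record that bootstrap ran.
yum -y update
echo "bootstrap completed at $(date)" > /tmp/bootstrap_done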

compute.tf: The display name, compartment OCID, image, shape and network parameters for the instance are configured here, as shown in the code snippet below.

 

resource "oci_core_instance" "TFInstance" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFInstance"
  image = "${var.InstanceImageOCID[var.region]}"
  shape = "${var.InstanceShape}"
 
  create_vnic_details {
    subnet_id = "${oci_core_subnet.ExampleSubnet.id}"
    display_name = "primaryvnic"
    assign_public_ip = true
    hostname_label = "tfexampleinstance"
  }
 
  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
  }
 
  timeouts {
    create = "60m"
  }
}

network.tf: Here we have the Terraform script for creating the VCN, subnet, internet gateway and route table. These are vital for creating and accessing the compute instance we provision.

resource "oci_core_virtual_network" "ExampleVCN" {
  cidr_block = "10.1.0.0/16"
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFExampleVCN"
  dns_label = "tfexamplevcn"
}
 
resource "oci_core_subnet" "ExampleSubnet" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  cidr_block = "10.1.20.0/24"
  display_name = "TFExampleSubnet"
  dns_label = "tfexamplesubnet"
  security_list_ids = ["${oci_core_virtual_network.ExampleVCN.default_security_list_id}"]
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.ExampleVCN.id}"
  route_table_id = "${oci_core_route_table.ExampleRT.id}"
  dhcp_options_id = "${oci_core_virtual_network.ExampleVCN.default_dhcp_options_id}"
}
 
resource "oci_core_internet_gateway" "ExampleIG" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFExampleIG"
  vcn_id = "${oci_core_virtual_network.ExampleVCN.id}"
}
 
resource "oci_core_route_table" "ExampleRT" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.ExampleVCN.id}"
  display_name = "TFExampleRouteTable"
  route_rules {
    cidr_block = "0.0.0.0/0"
    network_entity_id = "${oci_core_internet_gateway.ExampleIG.id}"
  }
}

block.tf: The script below defines the block volume that is created and attached (via iSCSI) to the provisioned compute instance.

resource "oci_core_volume" "TFBlock0" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFBlock0"
  size_in_gbs = "${var.DBSize}"
}
 
resource "oci_core_volume_attachment" "TFBlock0Attach" {
    attachment_type = "iscsi"
    compartment_id = "${var.compartment_ocid}"
    instance_id = "${oci_core_instance.TFInstance.id}"
    volume_id = "${oci_core_volume.TFBlock0.id}"
}

provider.tf: The provider script sets the OCI connection details.

 

provider "oci" {
  tenancy_ocid = "${var.tenancy_ocid}"
  user_ocid = "${var.user_ocid}"
  fingerprint = "${var.fingerprint}"
  private_key_path = "${var.private_key_path}"
  region = "${var.region}"
  disable_auto_retries = "true"
}

datasources.tf: Defines the data sources used in the configuration.

# Gets a list of Availability Domains
data "oci_identity_availability_domains" "ADs" {
    compartment_id = "${var.tenancy_ocid}"
}
 
# Gets a list of vNIC attachments on the instance
data "oci_core_vnic_attachments" "InstanceVnics" {
    compartment_id = "${var.compartment_ocid}"
    availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
    instance_id = "${oci_core_instance.TFInstance.id}"
} 
 
# Gets the OCID of the first (default) vNIC
data "oci_core_vnic" "InstanceVnic" {
    vnic_id = "${lookup(data.oci_core_vnic_attachments.InstanceVnics.vnic_attachments[0],"vnic_id")}"
}

outputs.tf: Defines the outputs of the configuration, which are the public and private IPs of the provisioned instance.

# Output the private and public IPs of the instance
 
output "InstancePrivateIP" {
value = ["${data.oci_core_vnic.InstanceVnic.private_ip_address}"]
}
 
output "InstancePublicIP" {
value = ["${data.oci_core_vnic.InstanceVnic.public_ip_address}"]
}

remote-exec.tf: Uses a null_resource with a remote-exec provisioner and depends_on to execute commands on the instance once the volume attachment is complete.

resource "null_resource" "remote-exec" {
    depends_on = ["oci_core_instance.TFInstance","oci_core_volume_attachment.TFBlock0Attach"]
    provisioner "remote-exec" {
      connection {
        agent = false
        timeout = "30m"
        host = "${data.oci_core_vnic.InstanceVnic.public_ip_address}"
        user = "ubuntu"
        private_key = "${var.ssh_private_key}"
    }
      inline = [
        "touch ~/IMadeAFile.Right.Here",
        "sudo iscsiadm -m node -o new -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port}",
        "sudo iscsiadm -m node -o update -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -n node.startup -v automatic",
        "echo sudo iscsiadm -m node -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port} -l >> ~/.bashrc"
      ]
    }
}

Oracle Cloud Infrastructure - Configuration

The major configuration needed on OCI is the security setup that allows Terraform to authenticate and provision an instance.

Click the username at the top of the Oracle Cloud Infrastructure console; in the drop-down that appears, select User Settings.

Now click the "Add Public Key" button to open a dialog, paste the contents of oci_api_key_public.pem (the public key) into it, and click the Add button.

Note: Please refer to the links below for details on OCI API keys.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3
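For quick reference, here is a minimal sketch of generating the API signing key pair and its fingerprint with OpenSSL, using the file names from this blog (see the links above for the authoritative steps):

# Generate the OCI API signing key pair used by the Terraform provider
openssl genrsa -out oci_api_key.pem 2048
openssl rsa -pubout -in oci_api_key.pem -out oci_api_key_public.pem
# Print the key fingerprint to use for TF_VAR_fingerprint
openssl rsa -pubout -outform DER -in oci_api_key.pem | openssl md5 -c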

 

Configuring the Build VM

Click the user drop-down at the top right of the page and select "Organization" from the menu.

Click the VM Templates tab and then the "New Template" button. Give the template a name of your choice and select "Oracle Linux 7" as the platform.

Once the template is created, click the "Configure Software" button.

Select Terraform from the list of software bundles available for configuration and click the + sign to add it to the template.

Then click "Done" to complete the software configuration.

Click the Virtual Machines tab, then click the "+New VM" button, enter the number of VMs you want to create, and select the VM template you just created, which is "terraformTemplate" for this blog.

Build Job Configuration

As part of the build configuration, add Git from the "Add Source Control" dropdown, then select the repository and branch you want to build from. You may select the checkbox to automatically trigger the build on SCM commits.

Select the Unix Shell Builder from the Add Builder dropdown, then add a script like the one below. The script first configures the environment variables using env-vars, then copies oci_api_key.pem and oci_api_key_public.pem to the directory expected by the scripts, and finally executes the Terraform commands to provision the OCI instance. The important commands are terraform init, terraform plan and terraform apply.
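The exact build script is not reproduced here; below is a minimal sketch of what it might look like, assuming the repository layout shown earlier and that -auto-approve is acceptable for a non-interactive build:

# Set up the environment variables used by the Terraform scripts
source exampleTerraform/env-vars

# Copy the OCI API keys to the path referenced by TF_VAR_private_key_path
mkdir -p /home/builder/.terraform.d
cp oci_api_key.pem oci_api_key_public.pem /home/builder/.terraform.d/

# Provision the OCI instance
cd exampleTerraform
terraform init
terraform plan
terraform apply -auto-approve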

terraform init – The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

terraform plan – The terraform plan command is used to create an execution plan. 

terraform apply – The terraform apply command is used to apply the changes required to reach the desired state of the configuration, i.e. the set of actions generated by a terraform plan run.

After execution, the script prints the IP addresses of the provisioned instance as output and then tries to make an SSH connection to the machine using the RSA keys supplied in the exampleTerraform folder.

Configure the Artifact Archiver to archive the terraform.tfstate file that is generated as part of the build execution. You may set the compression to GZIP or NONE.

Post Build Job Execution

In the build log you will see the private and public IP addresses of the instance provisioned by the Terraform scripts, followed by the SSH connection attempt. If everything goes fine, the build job should complete successfully.

Now you can go to the Oracle Cloud Infrastructure console to see that the instance has been created for you, along with the network and volumes defined in the Terraform scripts.

So now you can easily automate provisioning of Oracle Cloud Infrastructure using Terraform as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle
