Using OCI devops and terraform to synchronize an OCI bucket with a git repository

March 17, 2023 | 10 minute read


Sometimes, it is useful to be able to synchronize the contents of a git repository with an OCI bucket. A common use case is hosting a static website in an OCI bucket and wanting automation to manage that bucket for you. This blog post demonstrates using OCI DevOps and a small bit of terraform to manage the contents of an OCI storage bucket. There are a couple of issues you'll encounter along the way; I'll discuss them in this post, with solutions.

In this post, we're going to use an OCI Devops project to host a git repository, which will contain a terraform script and build.yaml file to deploy a git subdirectory to an OCI bucket automatically.

Why use terraform and git?

Using terraform and git allows for easy, managed maintenance of a bucket's contents. This can be useful for many purposes, e.g. if you're hosting a static website in a bucket and want to keep the website's code in a git repository.

  • Terraform allows for easy identification of appropriate Content-Type markings, so images and JavaScript files are properly identified to a browser from the object, automatically.
  • Terraform also remembers the bucket state, so content is fully managed, including object removals and other changes.
  • Git allows for standard developer environment integration, or using in-browser git editing environments, such as the one embedded in OCI Devops.

Key Components

  • OCI Devops: This demonstration includes a build profile for an OCI Devops project. We will also be using a git repository in the same project, as well as a build trigger.
  • Terraform: Terraform is used to manage the lifecycle of the bucket. Note that we will be using terraform remote_state to store the terraform state in another bucket. This is standard terraform practice.
  • OCI Object Storage: The ultimate target of the code.
  • OCI Vault: we need to store a couple of secret artifacts, so we'll put them in a vault.

Problems and how to solve them

The shallow git clone

The basic premise is very simple - we push some code to git, a trigger invokes a build process, and the build process deploys the bucket contents. However, git, when used in a build system such as OCI DevOps, performs what's known as a "shallow" clone of the repository. This shallow clone contains no history, which means the files on disk are not properly timestamped, and you can't recover the timestamps from the git log, as the log isn't fully present in a shallow clone. We need the file timestamps for change tracking to work - without them, terraform considers every commit to have changed every file, which is a significant inefficiency and could have security risks, depending on context. To solve this problem, we use a couple of git tricks. Save this snippet in a bash script (we'll call it in this post) and mark the file as executable.

#!/bin/bash
# Restore the full history, then set each file's mtime to its last commit date.
git fetch --unshallow

for f in $(git ls-files); do
    touch --date=@"$(git log -1 --date=unix --format='%cd' "$f")" "$f"
done

This snippet does two things: it "unshallows" the git repository (essentially "filling in" the missing history) and it lists all files in the commit, and touches them with the date of the commit where they were last modified. This gives a completely consistent view of the timestamps of the checked out code, which means the terraform state tracking can reliably determine a minimal change set.
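To see the effect in isolation, here is a self-contained sketch (assuming git and GNU coreutils are available; the repository, file name, and dates are all illustrative) that commits a file with a known date, clobbers its mtime, and then restores it with the same touch loop:

```shell
# Demo: restore a file's mtime from its last commit date.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo && cd repo
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > index.html
git add index.html
# Commit with a fixed author and committer date.
GIT_COMMITTER_DATE="2020-01-02T03:04:05Z" git commit -q -m "init" --date="2020-01-02T03:04:05Z"
touch index.html   # simulate a fresh checkout: mtime is "now"
# The same loop as in the script above.
for f in $(git ls-files); do
    touch --date=@"$(git log -1 --date=unix --format='%cd' "$f")" "$f"
done
date -u -r index.html +%Y-%m-%dT%H:%M:%SZ   # prints 2020-01-02T03:04:05Z
```

After the loop, the file's modification time matches its last commit date rather than the time of the clone, which is exactly what terraform needs to compute a minimal change set.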

Running terraform on OCI devops

Running terraform on OCI devops is quite straightforward; however, you need to persist the terraform state elsewhere, as the OCI devops build instance will not retain that state long term for you. This can easily be accomplished using terraform's remote_state feature. OCI buckets can be used to persist remote state, and we'll set up secrets in a vault to store the remote state access keys. Save this as (we'll use that name in the build spec below).

terraform {
    backend "s3" {
        bucket   = "terraform"
        key      = "[project]/terraform.tfstate"
        region   = "us-ashburn-1"
        endpoint = "[endpointurl]"
        skip_region_validation      = true
        skip_credentials_validation = true
        skip_metadata_api_check     = true
        force_path_style            = true
        workspace_key_prefix        = "[project]_workspaces"
    }
}

# This requires a one-off `terraform init -backend-config="access_key=[customersecretid]" -backend-config="secret_key=[customersecretvalue]"` to initialize the backend.

This terraform snippet sets up the remote state backend for terraform to talk to OCI. We will inject the access_key and secret_key from OCI vault secret values, using the OCI devops integration to include them in the build. The [project] and [endpointurl] values both depend on the specifics of your project. The endpoint URL is the S3-compatible access endpoint for OCI Object Storage; you can usually find it on the "Buckets" page of the OCI console for your tenancy. You will probably want to tweak the region as well.

A second issue you may encounter is a known bug: OCI devops, when using the resource principal (which you should be using), is unable to target regions other than the region it is running in. This can be worked around fairly easily by using a terraform wrapper script that forces the OCI_RESOURCE_PRINCIPAL_REGION environment variable. Save this file as (the name used in the build spec below) and mark it as executable.


#!/bin/bash
# Pin the resource principal region to the region the active workspace targets.
export OCI_RESOURCE_PRINCIPAL_REGION=$(if grep -q "PROD" .terraform/environment; then echo "us-ashburn-1"; else echo "us-phoenix-1"; fi)
terraform "$@"

This script reads the current workspace name from the .terraform/environment file. It presumes a good convention in how you use terraform, namely that you have a workspace targeting each of your primary deployment environments. (Personally, I target each environment at a different region as well, which is why I need this.)
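As a quick sanity check, the wrapper's branch can be exercised without terraform installed at all; `terraform workspace select` records the active workspace name in `.terraform/environment`, so we can just write that file by hand (the directory and region names below match the wrapper above):

```shell
# Standalone check of the wrapper's region selection logic.
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p .terraform
echo "PROD" > .terraform/environment   # what `terraform workspace select PROD` records
region=$(if grep -q "PROD" .terraform/environment; then echo "us-ashburn-1"; else echo "us-phoenix-1"; fi)
echo "$region"   # prints us-ashburn-1
```

Any workspace name other than PROD falls through to the second region, which is the behavior the wrapper relies on.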

The OCI Devops project

Now, finally, we come to the build_spec.yaml script. This specifies the build steps to get your project to build and deploy to the TARGET_ENV specified.

version: 0.1
component: build
timeoutInSeconds: 10000
shell: bash
failImmediatelyOnError: true

env:
  vaultVariables:
    # Substitute the OCIDs of your two vault secrets here.
    ACCESS_KEY: "[access_key_secret_ocid]"
    SECRET_KEY: "[secret_key_secret_ocid]"

steps:
  - type: Command
    name: Init terraform
    command: terraform init -no-color -backend-config="access_key=${ACCESS_KEY}" -backend-config="secret_key=${SECRET_KEY}"
  - type: Command
    name: Select workspace
    command: terraform workspace select ${TARGET_ENV} -no-color
  - type: Command
    name: Refresh terraform
    command: ./ refresh -no-color
  - type: Command
    name: Fix git dates
    command: ./
  - type: Command
    name: Apply terraform
    command: ./ apply -no-color -auto-approve

This runs 5 steps.

  1. We init the terraform using the remote_state backend discussed earlier. You'll need to have set up a bucket as directed in the terraform remote state documentation.
  2. We select the appropriate workspace for the environment we're targeting. If you're following good practice, you'll have a test and a production environment, and you can target each using a different terraform workspace.
  3. We refresh the terraform state, in case things have changed outside of terraform's control. This is usually a good idea.
  4. We run the date fixing command, to ensure the git clone has valid dates from the repository.
  5. We apply the changes from terraform. This will update the bucket.

The Terraform

The core terraform script does the magic of syncing a local directory (src here - see the ${path.module}/src base_dir) to an OCI bucket ([bucket_name] here, which you should configure). We use the pre-packaged template_files terraform module, as it has the capability to identify common file types and attach the appropriate Content-Type to each object. This is useful when using a bucket as a static host.

We also include the oci provider, with a region selected based on the workspace, as discussed earlier. This may or may not be relevant for your use case. We'll save this file as main.tf.

provider "oci" {
    auth   = "ResourcePrincipal"
    region = (terraform.workspace == "PROD") ? "us-ashburn-1" : "us-phoenix-1"
}

module "template_files" {
    source   = "hashicorp/dir/template"
    base_dir = "${path.module}/src"
}

data "oci_objectstorage_namespace" "os_namespace" {
}

resource "oci_objectstorage_object" "objects" {
    for_each     = module.template_files.files
    namespace    = data.oci_objectstorage_namespace.os_namespace.namespace
    bucket       = "[bucket_name]"
    object       = each.key
    content_type = each.value.content_type
    source       = abspath(each.value.source_path)
}
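For a feel of what the module is doing for you, here is a minimal shell sketch of that extension-to-Content-Type lookup (the table is illustrative; hashicorp/dir/template ships its own, much larger mapping):

```shell
# Illustrative extension -> Content-Type mapping (not the module's real table).
content_type() {
    case "${1##*.}" in
        html) echo "text/html" ;;
        css)  echo "text/css" ;;
        js)   echo "application/javascript" ;;
        png)  echo "image/png" ;;
        *)    echo "application/octet-stream" ;;
    esac
}

content_type index.html   # prints text/html
content_type logo.png     # prints image/png
```

Each object in the bucket then carries the right Content-Type header, so a browser renders the file instead of downloading it.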

Putting it all together

To put this demo together, we need to create a terraform state bucket, a couple of secrets in a vault to store access keys to said bucket, an OCI devops project, and then populate it with the appropriate elements.

  1. Create a bucket to store the terraform state (this can easily be shared across multiple terraform projects, as long as the [project] prefix discussed above is distinct for each one). Note: this should not be the bucket you are targeting for the synchronization. Use a separate dedicated bucket for terraform state.
  2. Create an ACCESS_KEY and SECRET_KEY for the terraform remote_state by creating a "Customer Secret Key" (see the OCI documentation on working with Customer Secret Keys).
  3. Create a Vault and populate TWO secrets, with the ACCESS_KEY and SECRET_KEY respectively. You'll need to populate the respective secret OCIDs in the build_spec.yaml file.
  4. Create a new Notifications Topic and Devops project, using the notifications topic. You'll need to enable logging as well.
    Creating an OCI Topic
    Creating an OCI DevOps Project
  5. Create a GIT repository in the devops project. You'll need to configure access to your repository, using either SSH or HTTPS.
    Create a Git Repository
  6. Create a build pipeline, with one stage - managed build. You will need to configure a policy as described in the policy guidance. Select the code repository you previously created. Add a parameter to the build pipeline "TARGET_ENV" with default value "STAGE".
    Select Managed Build as the build stage type
    Configure the build stage
    Selecting the git repository
    Adding a parameter to the build pipeline
  7. Create a build trigger that triggers the build pipeline on commit of the GIT code repository. Add an action, associated with the main branch and the pipeline above, that triggers on Push.
    Creating a build trigger
    Adding and configuring the build trigger action
    The final trigger configuration
  8. Commit the code you want to sync to the GIT repository, and include the files above in the root directory, as well as the bucket contents in a src subdirectory. You should have the build_spec.yaml file, the script, the script, the file, and the main.tf file, as well as a src subdirectory containing the files you want in your target bucket.
  9. The build should trigger, and you should see the terraform run and synchronize the bucket contents from the GIT repository into the desired bucket.


This is just a short overview of what I consider to be a surprisingly useful process. Many modifications and tweaks can be added to this process. For example, I have added in a download step to include some distributed binary files as part of a bucket, rather than storing them in the GIT repository. You can use the same basic process to do other terraform maintenance activities such as synchronizing OCI function builds, or maintaining synchronization of Resource Manager templates and stacks generated by said templates.

Christian Weeks

