How to Deploy Multiple Branches with Terraform and Github Actions
The Continuous Deployment (CD) environment is thriving these days. The tools we have are being actively improved, and new technologies are surfacing every day.
Today we are going to focus on two of those technologies, Terraform and Github Actions, and discuss how we can use them to implement a project that we had been wanting to tackle for some time at Acid Tango. We will implement a CD pipeline that deploys a new environment for every feature branch of a project.
The motivation for doing this is that in the past we’ve deployed web pages with Netlify, and they offer this feature as part of their deployment. If you're interested, here's more information about Netlify and other deployment tools.
We did this with our landing page, deploying one instance of the page for each pull request (PR) created, using an S3 bucket. But by the end of this article, you should have enough information to implement a similar pipeline regardless of how you are deploying your project.
The tools
In case you are not familiar with either Terraform or Github Actions, fear not, here is a brief introduction on what these two technologies are good for. If you know them, feel free to skip to the next section where we start working.
Terraform
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently - Terraform Docs
Terraform lets us manage infrastructure by writing and executing code that describes the resources we need. This technique is called Infrastructure as Code (IaC).
Terraform, written in Go, uses a declarative IaC language: we just describe the infrastructure we need with the language syntax, and from there, Terraform is able to detect all the changes necessary to reach the desired state.
You can read more about how it works, the syntax and much more in Terraform's official documentation.
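As a minimal sketch of what declarative means here (the names in this fragment are hypothetical, not part of our project), a configuration describing a single S3 bucket could look like this; Terraform compares it against the real infrastructure and computes whatever changes are needed:

```hcl
# A hypothetical, minimal Terraform configuration: we declare *what*
# we want (one S3 bucket), not the sequence of API calls to create it.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
}
```

Running `terraform apply` against this creates the bucket if it is missing and does nothing if it already matches the description.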
Github Actions
GitHub Actions help you automate your software development workflows in the same place you store code and collaborate on pull requests and issues - Github Docs
Github Actions is a relatively new feature from Github that allows developers to create CI/CD pipelines for their projects.
The main advantage of Github Actions (and Gitlab CI/CD) over more established automation solutions like Jenkins is that you don’t need to install or maintain additional servers to execute the pipelines. All the subscription plans Github offers (even the free one) include varying amounts of free minutes on their runners.
You can look for more information in the Github Actions Docs.
Defining the infrastructure
The first step is to define what we need to build for each instance of our page. As mentioned before, the only thing we need for our deployment is an S3 bucket. Here is an example of how we defined our resources with Terraform:
# Setting Up Remote State
terraform {
  # Terraform version at the time of writing this post
  required_version = ">= 0.12.24"

  backend "s3" {
    bucket = "example-bucket"
    key    = "example-key"
    region = "eu-west-1"
  }
}

# Terraform AWS Provider
# Docs: https://www.terraform.io/docs/providers/aws/index.html
provider "aws" {
  region = var.aws_region
}

variable "aws_region" {
  description = "The AWS region to create resources in."
  default     = "eu-west-1"
}

variable "branch_name" {
  description = "The name of the branch that's being deployed"
}

resource "aws_s3_bucket" "primary" {
  bucket        = "acid-web-${var.branch_name}"
  force_destroy = true

  policy = <<POLICY
{
  "Id": "bucket_policy_site",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::acid-web-${var.branch_name}/*",
      "Principal": "*"
    }
  ]
}
POLICY

  website {
    index_document = "index.html"
    error_document = "index.html"
  }

  tags = {
    Name    = "Acid Web - PR"
    Branch  = var.branch_name
    Project = "Acid Web"
  }
}
We’ve put all the code here for simplicity, but Terraform allows you to split this into as many files as you like, so feel free to organise your infrastructure however fits you best.
One key thing to note here is that we need to set up a Terraform backend to enable remote state. This is important as we need the state to be stored across the different executions of the CD pipeline (more on this later). Terraform supports different backend types. We decided to use an S3 backend as we are already working with AWS, but feel free to choose the one that best suits you.
Building the pipeline
The other part of the puzzle is to configure our project with Github Actions to create and destroy the resources as we need them. We want to create a new environment for each pull request we open against our project's main branch.
For this, we created two different workflows: one for creating the deployment and updating it with new changes from the PR, and one for destroying the resources once the PR is merged.
Workflow to create the deployment
name: Create Test Environment

on:
  pull_request:
    branches:
      - master

jobs:
  create-infra:
    name: Setup Infrastructure
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infrastructure/pr-module
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: 'eu-west-1'
    steps:
      - uses: actions/checkout@v2
      - run: echo "::set-env name=BRANCH_NAME::${{ github.head_ref }}"
      - run: terraform init
      - run: terraform workspace select $BRANCH_NAME || terraform workspace new $BRANCH_NAME
      - run: terraform apply -var="branch_name=$BRANCH_NAME" -auto-approve

  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    needs: create-infra
    steps:
      - uses: actions/checkout@v2
      - uses: actions/cache@v1
        with:
          path: ~/.cache/yarn
          key: ${{ runner.os }}-yarn-${{ hashFiles(format('{0}{1}', github.workspace, '/yarn.lock')) }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Install dependencies
        run: yarn install
      - name: Generate static
        run: yarn generate
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: acid-web-${{ github.head_ref }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'eu-west-1'
          SOURCE_DIR: './dist'
      - uses: chrnorm/deployment-action@releases/v1
        name: Create GitHub deployment
        id: deployment
        with:
          token: '${{ github.token }}'
          target_url: http://acid-web-${{ github.head_ref }}.s3-website-eu-west-1.amazonaws.com
          environment: ${{ github.head_ref }}
          initial_status: success
          ref: ${{ github.event.pull_request.head.sha }}
We are instructing Github to execute the workflow on the pull_request event, which fires every time a PR is created, synchronised with the source branch (a new commit is pushed), or reopened.
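Those three situations are in fact Github's default activity types for the pull_request event, so the trigger above is equivalent to spelling them out explicitly:

```yaml
# Equivalent trigger with the default activity types listed explicitly
on:
  pull_request:
    branches:
      - master
    types: [opened, synchronize, reopened]
```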
Then, we define two jobs: one to execute the necessary Terraform commands and one to deploy the site once Terraform has finished applying changes.
The first job runs in the directory of the project where your Terraform files are defined. In our case, that is the infrastructure/pr-module folder.
The job first sets an env variable with the branch name using the ::set-env instruction, which is the syntax Github provides for Workflow Commands (newer Github runners replace ::set-env with writing to the $GITHUB_ENV file). We then use the $BRANCH_NAME variable in the following steps. The next step creates a Terraform workspace for the branch. We do this to isolate the states of multiple branches, allowing us to have several pull requests deployed with just one Terraform configuration. Note that the commands of this job must work both when we are creating the workspace for the first time and when it already exists, given that this pipeline also runs every time the PR is updated.
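One caveat worth flagging: our buckets are named acid-web-${BRANCH_NAME}, but S3 bucket names must be lowercase and cannot contain characters like "/" or "_", which are common in branch names. A hedged sketch of a sanitisation step you could add before handing the name to Terraform (the variable names here mirror the workflow, but this step is not part of the original pipeline):

```shell
# Hypothetical sanitisation: S3 bucket names allow only lowercase
# letters, digits, dots, and hyphens, so a branch like
# "feature/New_Landing" would produce an invalid bucket name.
BRANCH_NAME="feature/New_Landing"

# Lowercase everything, then replace anything outside [a-z0-9-] with a dash
SAFE_NAME=$(printf '%s' "$BRANCH_NAME" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9-' '-')

BUCKET="acid-web-${SAFE_NAME}"
echo "$BUCKET"   # acid-web-feature-new-landing
```

If your team's branch naming convention already sticks to lowercase letters, digits, and hyphens, you can skip this step, as we did.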
The second job is responsible for building and deploying the site. The first steps are a common way to deploy a static site to S3: first building the project (we are using yarn for this), and then uploading the results to the appropriate bucket. For the latter, we are using the jakejarvis/s3-sync-action action.
The last step of the workflow creates a Github Deployment, which is what shows the nice deployment link on the PR page. For this, we also used an existing Github Action, this time chrnorm/deployment-action. Credit to the authors of both actions for doing the heavy lifting for us.
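The target_url passed to the deployment action follows the standard S3 website endpoint pattern. As a small sketch of how that link is composed (the acid-web- prefix comes from our Terraform configuration; the branch value here is just an example):

```shell
# S3 website endpoints follow the pattern:
#   http://<bucket>.s3-website-<region>.amazonaws.com
BRANCH_NAME="my-feature"
AWS_REGION="eu-west-1"

TARGET_URL="http://acid-web-${BRANCH_NAME}.s3-website-${AWS_REGION}.amazonaws.com"
echo "$TARGET_URL"   # http://acid-web-my-feature.s3-website-eu-west-1.amazonaws.com
```

If you deploy somewhere other than S3, this is the URL you would swap out for your own environment's address.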
In case you’re not deploying your project to S3, you would need to change this second job to suit your specific needs.
Workflow for tearing down the deployment
Once we are done reviewing the changes and testing them in the test environment, we merge the PR and delete the branch. At that point we need to remove the resources used to deploy the PR, and that is exactly what this second workflow is tasked to do:
name: Remove Test Environment

on:
  pull_request:
    branches:
      - master
    types: [closed]

jobs:
  delete-infra:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infrastructure/pr-module
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: 'eu-west-1'
    steps:
      - uses: actions/checkout@master
      - run: echo "::set-env name=BRANCH_NAME::${{ github.head_ref }}"
      - run: terraform init
      - run: terraform workspace select $BRANCH_NAME
      - run: terraform destroy -var="branch_name=$BRANCH_NAME" -auto-approve
      - run: terraform workspace select default
      - run: terraform workspace delete $BRANCH_NAME
As you can see, this workflow is more straightforward. With the closed event type on pull_request, we can execute it exactly when we need to. You can read more about the available events and types that trigger workflows.
All that was left to do was select the workspace, delete all the resources, and finally remove the workspace.
Credentials management
You need to be careful not to commit any AWS credentials to the code. We handled this in Terraform by using one of the supported authentication methods for the AWS Provider. In Github Actions, you should store sensitive information as encrypted secrets and reference them with ${{ secrets.YOUR_SECRET }}.
Wrapping up
With a few configuration files and a lot of trial and error, we managed to get a working solution that lets us deploy all new changes without altering the “production” version of the page. This kind of deployment can help any project reduce the time it takes to validate changes and spot errors while evolving a product.
Thank you for reading! If you found this article helpful (or not), we'd love to read your feedback in the comments section below.