
How to set up a CI/CD pipeline for AWS Lambda?

Applying DevOps principles to shorten time to production for Serverless applications.

by Grégoire Mielle · Last updated on 5/27/2022
💡 This article explains what a continuous integration & deployment (CI/CD) pipeline is and how to set one up for a serverless application, walking through an example using AWS Lambda, Golang, Terraform & GitHub Actions. It does not go into the details of each technology. A GitHub repository containing all the source code used in this article is available.

A continuous integration & deployment (CI/CD) pipeline is one of the most impactful ways to increase a team’s velocity: no one wants to spend hours repetitively entering commands to see their changes in production. It also gives autonomy & serenity to team members: bugs get fixed quickly, changes get released often.

Serverless, and especially AWS Lambda, is no exception to the rule. Not managing servers and serving thousands of requests without lifting a finger is great, but being able to release often is just as important. However, choosing a stack of tools for this job can quickly become a headache.

Questions that come up when setting up an AWS Lambda CI/CD pipeline:

  • How configurable should it be?
    • How do you manage more than a lambda function (e.g. a database, an S3 bucket, a Cognito pool)?
    • Should the configuration be abstracted so that other teams can use it?
  • How easy is it to set up?
    • Does it take a week to deploy a lambda function?
    • Are there strong defaults to follow to avoid mistakes?
  • Can it be integrated with tools I already use?
    • Can I still use GitHub Actions to build my code?

The CI/CD landscape for AWS Lambda & serverless applications

DevOps practices have become ubiquitous in the Cloud world. Fortunately, they follow the same steps whatever compute model is used, serverless or not.

Defining your infrastructure using code

Infrastructure as code (IaC) lets you define & automate operations around your application’s infrastructure. It allows you to manage not only serverless functions but also databases, API gateways or S3 buckets. While you could create these resources using a web interface or CLI commands, IaC has a lot of benefits:

Benefits of infrastructure-as-code

  • Can use version control to track changes made to infrastructure
  • Can use pull requests to manage what changes are accepted and go live
  • Easy to replicate to build a staging, testing or QA environment from production

When it comes to AWS Lambda, you can choose from two categories of solutions: AWS powered or not.

AWS powered

  • AWS CloudFormation: AWS solution to describe, deploy & manage cloud resources.
  • AWS SAM: Superset of CloudFormation in the context of serverless applications

Not AWS powered

  • Serverless framework: Cloud provider-agnostic solution for serverless applications
  • Terraform: Cloud provider-agnostic solution for all things related to infrastructure

As general advice, Terraform is the way to go: it’s cloud-provider agnostic, open source & multi-purpose. You can use it anywhere (AWS, Azure, GCP) and for any infrastructure (you can even write custom plugins), while AWS-centered solutions are, by definition, limited to AWS.

Building, testing, packaging your code

With tools like GitHub Actions, CircleCI or AWS CodeBuild, you can trigger workflows when new code is pushed to your repository to build, test and even package your code.

These tools rely on the same abstractions (workflows & jobs). Your choice mainly depends on solutions you already know or use.

AWS Lambda has its specificities when it comes to packaging code and offers two options:

  • Zip archives (50 MB max.): contain your source code organized in a layout that depends on your language. The archive is uploaded to AWS S3 to be used by your serverless function
  • Docker images (10 GB max.): must be based on an AWS Lambda-compatible image. The image is uploaded to AWS ECR to be used and can be tested locally using a runtime interface emulator

Your choice mainly depends on the size of your source code & how familiar you are with Docker-based workflows.
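
For instance, producing a zip archive for a Go function targeting the custom provided.al2 runtime (the setup used later in this article) boils down to a couple of commands; that runtime requires the executable to be named bootstrap:

# Build a static Linux binary for the arm64 Lambda architecture
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o bootstrap main.go

# Lambda expects the executable at the root of the archive
zip function.zip bootstrap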

Deploying your application

The final step of continuous deployment is to... deploy your application. Once you’ve packaged & published a new version of your lambda function, you want to use it in your environments (production, staging, testing, etc.).

A new deployment is always a stressful moment:

  • Can issues be detected quickly when they arise?
  • Can the deployment be rolled back quickly?
  • Can the application be released gradually to new users instead of all at once?

When it comes to AWS Lambda, you use aliases & versions:

  • Version: an immutable snapshot of your function’s code & configuration, created each time you publish it
  • Alias: a pointer to a specific version of your function. For example, a production alias pointing to version 2 while the staging alias points to $LATEST

You can manually deploy new versions of your function by updating an alias in a CI pipeline or through the AWS console, or use a solution like AWS CodeDeploy.

AWS CodeDeploy automates alias updates and allows you to do traffic shifting (also known as blue/green deployment). However, AWS CodeDeploy is pretty hard to use if you’re not already using AWS SAM (mentioned above). Moreover, traffic shifting can also be done manually through the AWS console.
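
To make this concrete, here is a sketch of manual traffic shifting with the AWS CLI (the function & alias names are the ones used later in this article; the version numbers are examples):

# Point 10% of the production traffic to version 4, keep 90% on version 3
aws lambda update-alias \
  --function-name example-api \
  --name production \
  --function-version 3 \
  --routing-config 'AdditionalVersionWeights={"4"=0.1}'

# Once confident, promote version 4 to 100% of the traffic
aws lambda update-alias \
  --function-name example-api \
  --name production \
  --function-version 4 \
  --routing-config 'AdditionalVersionWeights={}'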

A CI/CD pipeline for AWS Lambda using Terraform, GitHub Actions & AWS CLI

Defining your infrastructure with Terraform

Terraform helps you define all the infrastructure building blocks that you need to make your application work.

Here are the steps to follow to use Terraform for your project:

  • Install the Terraform CLI on your machine
  • Create an AWS S3 bucket that will be used to store Terraform’s state
  • Create an AWS IAM user with programmatic access & the AdministratorAccess policy
  • Create a terraform folder in your project with a main.tf file

When using Terraform, you use the HashiCorp Configuration Language (HCL) with blocks & arguments to manage your infrastructure.

resource "aws_instance" "example" {
	property_one = "one"

	settings {
		enabled = true
	}
}

If you want to dive deeper into HCL, you can visit this page on Terraform’s website.

You first need to configure Terraform for your project by specifying which version to use, where to store the infrastructure state (persistence layer) & which providers to depend on (e.g. AWS; visit this page to see all providers).

terraform {
  // Ensures that everyone is using a specific Terraform version
  required_version = ">= 0.14.9"

  // Where Terraform stores its state to keep track of the resources it manages
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }

  // Declares providers, so that Terraform can install and use them
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }

    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.2.0"
    }
  }
}

// Specify settings for a given required provider
provider "aws" {
  region = "us-east-1"
}

Make sure to set up the AWS credentials Terraform will use, following one of the techniques described on this page. Running terraform init in the same directory then prepares your working directory for the other commands.
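
For example (one of the options described there), you can expose the credentials of the IAM user created earlier through environment variables before initializing:

# Credentials of the IAM user with programmatic access, picked up by the AWS provider
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."

# Prepares the working directory: downloads providers & configures the S3 backend
terraform init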

Now that Terraform is set up, let’s create the resources you need for your project. In an api.tf file in the same folder, you’re gonna create:

  • Everything you need to package & store your code
  • Your lambda function & everything it needs to run
  • An API gateway to make your lambda function available on the Internet

/*
 * Data blocks request that Terraform read from a given data source
 * and export the result under the given local
 * Here we're creating a zip archive to be used below
 */
data "archive_file" "api_code_archive" {
  type        = "zip"
  source_file = "${path.root}/../bootstrap"
  output_path = "${path.root}/../bootstrap.zip"
}

// S3 bucket in which we're gonna release our versioned zip archives
resource "aws_s3_bucket" "api_bucket" {
  bucket        = "example-api-bucket"
  force_destroy = true
}

/*
 * The first archive is uploaded through Terraform
 * The following ones will be uploaded by our CI/CD pipeline in GitHub Actions
 */
resource "aws_s3_bucket_object" "api_code_archive" {
  bucket = aws_s3_bucket.api_bucket.id
  key    = "bootstrap.zip"
  source = data.archive_file.api_code_archive.output_path
  etag   = filemd5(data.archive_file.api_code_archive.output_path)

  lifecycle {
    ignore_changes = [
      etag,
      version_id
    ]
  }
}

You define an AWS S3 bucket in which a new zip archive will be uploaded each time you want to release a new version of your application. While the first version of this archive will be uploaded using Terraform, the following ones will be uploaded through your CI/CD pipeline with GitHub Actions.

resource "aws_lambda_function" "api_lambda" {
  function_name    = "example-api"
  role             = aws_iam_role.api_lambda_role.arn
  s3_bucket        = aws_s3_bucket.api_bucket.id
  s3_key           = aws_s3_bucket_object.api_code_archive.key
  source_code_hash = data.archive_file.api_code_archive.output_base64sha256
  /*
   * Architecture, runtime & handler might differ
   * depending on the programming language you use
   */
  architectures    = ["arm64"]
  runtime          = "provided.al2"
  handler          = "bootstrap"
  memory_size      = 128
  publish          = true

  lifecycle {
    ignore_changes = [
      last_modified,
      source_code_hash,
      version,
      environment
    ]
  }
}

/*
 * An alias allows us to point our
 * API gateway to a stable version of our function
 * which we can update as we want
 */
resource "aws_lambda_alias" "api_lambda_alias" {
  name             = "production"
  function_name    = aws_lambda_function.api_lambda.arn
  function_version = "$LATEST"

  lifecycle {
    ignore_changes = [
      function_version
    ]
  }
}

resource "aws_cloudwatch_log_group" "api_lambda_log_group" {
  name              = "/aws/lambda/${aws_lambda_function.api_lambda.function_name}"
  retention_in_days = 14
  tags              = {}
}

resource "aws_iam_role" "api_lambda_role" {
  name = "example-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

/*
 * Add a policy to our role
 * to be able to push logs from our function
 */
resource "aws_iam_role_policy" "lambda_policy" {
  name = "lambda-role-policy"
  role = aws_iam_role.api_lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource = "*"
      }
    ]
  })
}

You then define an AWS Lambda function with a Cloudwatch logs group to be able to monitor your function. A production alias is created to maintain a pointer to a stable version of your function. This is how you’ll continuously (& safely) release code to production. If you need more aliases for more environments, feel free to create them.

Finally, a role & an attached policy are created so that your function can execute & push logs.

/*
 * An API gateway to expose our function
 * to the Internet
 */
resource "aws_apigatewayv2_api" "api_gateway" {
  name          = "example-api-gateway"
  protocol_type = "HTTP"
  tags          = {}
}

resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
  name              = "/aws/api_gateway_log_group/${aws_apigatewayv2_api.api_gateway.name}"
  retention_in_days = 14
  tags              = {}
}

/*
 * Default stage for our API gateway
 * with basic access logs
 */
resource "aws_apigatewayv2_stage" "api_gateway_default_stage" {
  api_id      = aws_apigatewayv2_api.api_gateway.id
  name        = "$default"
  auto_deploy = true
  tags        = {}

  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.api_gateway_log_group.arn

    format = jsonencode({
      requestId               = "$context.requestId"
      sourceIp                = "$context.identity.sourceIp"
      requestTime             = "$context.requestTime"
      protocol                = "$context.protocol"
      httpMethod              = "$context.httpMethod"
      status                  = "$context.status"
      responseLatency         = "$context.responseLatency"
      path                    = "$context.path"
      integrationErrorMessage = "$context.integrationErrorMessage"
    })
  }
}

/*
 * Integrate our function with our API gateway
 * so that they can communicate
 */
resource "aws_apigatewayv2_integration" "api_gateway_integration" {
  api_id             = aws_apigatewayv2_api.api_gateway.id
  integration_uri    = "${aws_lambda_function.api_lambda.arn}:${aws_lambda_alias.api_lambda_alias.name}"
  integration_type   = "AWS_PROXY"
  integration_method = "POST"
  request_parameters = {}
  request_templates  = {}
}

/*
 * Tell our API gateway to forward all incoming
 * requests (every path + HTTP verb) to our function
 */
resource "aws_apigatewayv2_route" "api_gateway_any_route" {
  api_id               = aws_apigatewayv2_api.api_gateway.id
  route_key            = "ANY /{proxy+}"
  target               = "integrations/${aws_apigatewayv2_integration.api_gateway_integration.id}"
  authorization_scopes = []
  request_models       = {}
}

/*
 * Allow our API gateway to invoke our function
 */
resource "aws_lambda_permission" "api_gateway_lambda_permission" {
  principal     = "apigateway.amazonaws.com"
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.api_lambda.function_name
  qualifier     = aws_lambda_alias.api_lambda_alias.name
  source_arn    = "${aws_apigatewayv2_api.api_gateway.execution_arn}/*/*"
}

/*
 * Tell Terraform to output the URL
 * of our default API gateway stage after each `terraform apply`
 */
output "api_gateway_invoke_url" {
  description = "API gateway default stage invokation URL"
  value       = aws_apigatewayv2_stage.api_gateway_default_stage.invoke_url
}

The last part of your infrastructure relates to the API gateway, which exposes the function on the Internet. The API gateway is integrated with the Lambda function in a way that logs every access (IP address, duration of the request, etc.) & forwards every request (all paths & HTTP verbs) to the function.

With terraform plan, you can preview all the changes Terraform is about to make. With terraform apply, those changes are performed on your AWS account.
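
The loop then looks like this; terraform output lets you print the invocation URL declared in the output block whenever you need it:

# Preview the changes without applying them
terraform plan

# Perform the changes on your AWS account (asks for confirmation)
terraform apply

# Print the API gateway URL at any time
terraform output api_gateway_invoke_url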

Building, testing & packaging your code using GitHub Actions & AWS CLI

Now that your infrastructure is set up, you need to make sure that changes made to your code are safe to be deployed in production. Using GitHub Actions, you can define a workflow which will be executed based on events happening on your repository. This is your continuous integration (CI) pipeline.

You’re gonna set up a workflow that runs on every push & pull request to build, lint & test your code. This example uses Golang and will need to be adapted if you’re using a different programming language.
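
For reference, the function being built & tested can be as simple as this minimal Go handler (a sketch; the complete source code lives in the GitHub repository mentioned at the beginning of this article):

package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler answers the HTTP requests forwarded by the API gateway
func handler(ctx context.Context, req events.APIGatewayV2HTTPRequest) (events.APIGatewayV2HTTPResponse, error) {
	return events.APIGatewayV2HTTPResponse{
		StatusCode: 200,
		Body:       "Hello from " + req.RawPath,
	}, nil
}

func main() {
	lambda.Start(handler)
}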

Let’s create a .github/workflows folder in your project & add a ci.yaml file to it.

name: Continuous integration

on: [push, pull_request]

jobs:
  all:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.17

      - name: Verify dependencies
        run: go mod verify

      - name: Build
        run: make build

      - name: Run go vet
        run: go vet ./...

      - name: Install golint
        run: go install golang.org/x/lint/golint@latest

      - name: Run golint
        run: golint ./...

      - name: Run tests
        run: go test ./...

Once pushed, the results of each workflow run will be displayed in the checks section of pull requests & next to each commit (green check if everything passed, red cross on errors).
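
Note that the make build target is not shown in this article. A minimal Makefile matching the provided.al2 runtime & arm64 architecture declared earlier in Terraform could look like this (an assumption; adapt it to your project layout):

build:
	GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o bootstrap main.go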

You’re gonna create a second workflow which only runs on your main branch when the first workflow succeeds, to package your code & push a new archive + function version. This is your continuous deployment (CD) pipeline.

name: Continuous deployment

on:
  workflow_run:
    workflows: ['Continuous integration']
    branches: [master]
    types:
      - completed

jobs:
  deploy:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - uses: actions/checkout@v2

      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.17

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Build
        run: make build

      - name: ZIP build
        run: zip ${{ github.run_id }}.zip bootstrap

      - name: Upload to S3
        run: aws s3 cp ${{ github.run_id }}.zip s3://example-api-bucket/${{ github.run_id }}.zip

      - name: Update lambda function code
        run: aws lambda update-function-code --function-name example-api --s3-bucket example-api-bucket --s3-key ${{ github.run_id }}.zip

      - name: Sleep for 5 seconds
        run: sleep 5s
        shell: bash

      - name: Release lambda function version
        run: aws lambda publish-version --function-name example-api --description ${{ github.run_id }}

Make sure to set up repository secrets for AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY so that the AWS CLI can interact with S3 & Lambda. The 5-second pause gives Lambda time to finish processing the code update before the new version is published.

Deploying the application using the AWS console

To deploy one of the new versions of your function created at the end of your CD pipeline, you need to update the production alias to point to that version.

Since this is a sensitive operation, it can be done manually in the AWS console. In advanced CD pipelines, this step could be automated with traffic shifting & health checks. Go to the “Aliases” section of your AWS Lambda function and edit the production one to point to a new version.

You could even do traffic shifting using the “Weighted alias” feature. Since AWS automatically defines environment variables like AWS_LAMBDA_FUNCTION_VERSION, you could add it to all your logs & see whether a new function version generates more errors or increases latency compared to the previous one.
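
As a sketch, extending the handler shown earlier to attach the version to every log line (assuming the log & os packages are imported):

// AWS injects AWS_LAMBDA_FUNCTION_VERSION into every execution environment
version := os.Getenv("AWS_LAMBDA_FUNCTION_VERSION")
log.Printf("version=%s method=%s path=%s", version, req.RequestContext.HTTP.Method, req.RawPath)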

Wrapping up

In this article, you’ve seen how to set up a CI/CD pipeline for a serverless application using Golang, AWS Lambda, Terraform & GitHub Actions.

CI/CD pipelines are an important part of DevOps practices to shorten the delivery of software applications. If you want to learn more about monitoring and the differences between Logging, Tracing & Profiling to ensure reliable releases, feel free to check out this article.
