AWS Serverless CI/CD with GitHub Actions
AWS and GitHub are great services for managing and deploying cloud applications, but getting code from GitHub to AWS is sometimes not as straightforward as we would like it to be. Sure, there are tools out there that bridge the gap (CircleCI, Jenkins, many others), but that means learning another framework, hoping that everything is tightly integrated, and hoping you remember how it all works when something breaks half a year after you set it up. GitHub Actions can help simplify this. Actions are easily configured workflows that are triggered when you do something with git. For example, you can set up an action that automatically tests your code every time you push it to GitHub. You can then extend that workflow so that it builds your code in a test environment when you create a pull request. This allows another member of the team to easily access the running code while reviewing the source code, simplifying the code review process as you get that warm, cozy feeling of knowing that the code you are reviewing does what you expect it to do.
In this post, I'm going to walk through the steps of setting up a CI/CD pipeline inside GitHub to manage Lambda Layers builds.
Coding Workflows
YAML is used to configure the workflows, and a complete reference of the workflow syntax used by GitHub Actions can be found here. Fortunately, both YAML and the workflow syntax are pretty easy to work with. Start out by creating a new branch in your repo. The workflows are going to go in the .github/workflows directory in the root of your repo. You can have as many workflows as you like in here, as long as each file has a .yml or .yaml extension. I'm going to focus on creating a workflow for building a distribution for AWS Lambda Layers; a full description of how to build workflows for other things can be found here. In the .github/workflows directory, create a file called awsLayers.yaml - you can give it any name you like as long as it has the .yml or .yaml extension. Here's my file:
# This workflow will install dependencies and create a build suitable
# to be used in an AWS Lambda Layer. The build will then be uploaded
# to S3 and then can be accessed from any lambda that uses the layer.
#
# This build is only for dev builds. Releases will be built from a
# separate action.
#
# A new version of the layer will be created for every branch when a
# pull request is initiated. This allows us to test the layer in a
# dev environment on AWS BEFORE the code is merged into main.
name: Build Lambda Layer

on:
  pull_request:
    branches: [ main ]

jobs:
  deploy:
    name: Upload Layer to AWS Lambda
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # For more info: https://github.com/aws-actions/configure-aws-credentials
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Setup Python 3.8
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Zip it all up and upload to S3
        env:
          release_bucket: my_bucket_name
          release_bucket_uri: s3://my_bucket_name
          release_id: ${{ format('SomeId-{0}-dev.zip', github.head_ref) }}
          release_layer: ${{ format('SomeId-{0}-dev', github.head_ref) }}
        run: |
          mkdir python
          # assuming your requirements file is in 'requirements/prod.txt'
          pip install -r requirements/prod.txt -t python
          pip install . -t python
          echo building release $release_id
          # zip it up
          zip --quiet -r $release_id python
          # copy the file to S3 and install it in lambda layers
          aws s3 cp $release_id $release_bucket_uri
          aws lambda publish-layer-version --layer-name $release_layer --content S3Bucket=$release_bucket,S3Key=$release_id --compatible-runtimes python3.8
Let's walk through all of this…
Here’s the name of the workflow:
name: Build Lambda Layer
This next bit tells GitHub that the workflow should run on pull requests against the main branch (i.e., code you are merging into main).
on:
  pull_request:
    branches: [ main ]
Now, if you have a more complex git flow going on, you may not be merging into main - so don’t just blindly cut and paste this code.
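For example, if your team also opens pull requests against release branches, the trigger might look more like this (the branch pattern here is just illustrative):

on:
  pull_request:
    branches:
      - main
      - 'release/**'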
Next, define the jobs that will be run. According to the GitHub Actions docs, you can have multiple jobs, and they will run in parallel by default unless you specifically indicate that you want the jobs to run in sequence.
I only have one job in this case, called deploy. This job has the name Upload Layer to AWS Lambda. We are also going to specify an Ubuntu virtual machine to run on. We are not really doing anything exciting here, so the distribution and version of Linux are really not that important.
jobs:
  deploy:
    name: Upload Layer to AWS Lambda
    runs-on: ubuntu-latest
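If you later split the work into separate jobs - say, one for tests and one for the upload - needs is how you tell GitHub to run them in sequence instead of in parallel. A minimal sketch (the job names here are illustrative):

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # run your tests here
  deploy:
    # deploy waits for test to finish successfully
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # build and upload the layer here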
The next section defines the steps that will be run. The first step is to check out the code. The workflow runs in a virtualized environment, so we need to grab our code from the repo before we can do anything useful.
    steps:
      - name: Checkout
        uses: actions/checkout@v2
The next bit configures an AWS user that we can use to upload the files safely to S3.
It's important that the user you create has only minimal permissions on AWS! The user should only be used for GitHub Actions access!
It's probably a good idea to create a separate bucket on S3 to receive the files that will be uploaded to AWS Lambda Layers. To lock down the user, I created two policy files: one to allow uploads to a specific bucket on S3, and a second one that allows layers to be published on Lambda.
The upload policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutBucketNotification",
        "s3:AbortMultipartUpload",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my_bucket_name/*",
        "arn:aws:s3:::my_bucket_name"
      ]
    }
  ]
}
The policy to add layers to Lambda only needs one action!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "lambda:PublishLayerVersion"
      ],
      "Resource": "*"
    }
  ]
}
You could certainly combine the two policies if you wanted to, but I prefer making smaller policies that I can attach to different groups of users if I need to.
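If you prefer to do this setup from the command line, it might look roughly like the following - the user name, policy name, file paths, and account ID are all placeholders:

# create the bucket that will receive the layer builds
aws s3 mb s3://my_bucket_name
# create a dedicated CI user and attach the upload policy to it
aws iam create-user --user-name github-layer-ci
aws iam create-policy --policy-name layer-s3-upload --policy-document file://s3-upload-policy.json
aws iam attach-user-policy --user-name github-layer-ci --policy-arn arn:aws:iam::123456789012:policy/layer-s3-upload
# generate the access key that will be stored in GitHub secrets
aws iam create-access-key --user-name github-layer-ci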
We will load the configure-aws-credentials action, which can be found in the aws-actions repo on GitHub.
You will need to store the AWS credentials for the user you created earlier in a secret vault on GitHub. DO NOT PUT YOUR CREDENTIALS IN YOUR REPO - of course, you already know this, right? Instructions for stashing your credentials securely on GitHub can be found here.
Something to take note of here is that the secrets are stored on a per-repo basis. This means that you could create a separate AWS user for each GitHub repo - a pretty good idea if you are remotely paranoid about such things. Even better would be to set up a test account using AWS Organizations to keep your test code separate from other environments - but that's beyond the scope of this article.
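If you have the GitHub CLI installed, you can also stash the secrets without leaving the terminal - a quick sketch (run from inside the repo, with placeholder values):

# store the CI user's credentials as repo secrets
gh secret set AWS_ACCESS_KEY_ID --body "<access key id for the CI user>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<secret access key for the CI user>"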
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
Load up Python - I'm using Python 3.8:
      - name: Setup Python 3.8
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
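Side note: if you ever need the layer built for more than one runtime, setup-python also works with a build matrix. A minimal sketch (the version list here is just an example):

jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.7, 3.8]
    steps:
      - name: Setup Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}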
Finally, the interesting part. This is where we build the distribution, copy it to S3, and then load it into Lambda Layers. In order for the code to be installed correctly in the layer, it needs to be in a directory called python. The run section below creates the directory, installs all of the dependencies, and then installs the code itself. For the release_id, you can really use anything you want; the format command below is going to create something like MyRepo-BranchName-dev.zip. If you look at the GitHub docs you can find a bunch of other things you can stick on there.
Side Note: Instead of using a standard requirements file (i.e., requirements.txt), I have created a requirements directory with a dev.txt, test.txt, and a prod.txt. This allows me to control what is being installed in the production environment, which is important when developing for Lambda. For example, you don't need to load boto3 since Lambda already has it, and you don't want to load dev or test tools into the production environment. For that matter, you also do not want to ship all of your tests and documentation files to production, so you should consider writing filters to remove (or not include) this stuff when building your layer code. A simple way to do this is the --exclude flag in zip (check the man page for more info on how that works).
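For example, swapping the zip line in the workflow for something like this would keep tests and bytecode caches out of the layer (the patterns here are just illustrative):

# exclude test directories and __pycache__ from the layer build
zip --quiet -r $release_id python --exclude "python/*/tests/*" "python/*/__pycache__/*"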
      - name: Zip it all up and upload to S3
        env:
          release_bucket: my_bucket_name
          release_bucket_uri: s3://my_bucket_name
          release_id: ${{ format('SomeId-{0}-dev.zip', github.head_ref) }}
          release_layer: ${{ format('SomeId-{0}-dev', github.head_ref) }}
        run: |
          mkdir python
          # assuming your requirements file is in 'requirements/prod.txt'
          pip install -r requirements/prod.txt -t python
          pip install . -t python
          echo building release $release_id
          # zip it up
          zip --quiet -r $release_id python
The code is then compressed with zip, and the file is copied to S3. Once the file has been uploaded, the AWS command line tool is used to publish the layer in AWS Lambda so that it can be used from other lambdas.
          # copy the file to S3 and install it in lambda layers
          aws s3 cp $release_id $release_bucket_uri
          aws lambda publish-layer-version --layer-name $release_layer --content S3Bucket=$release_bucket,S3Key=$release_id --compatible-runtimes python3.8
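Once a layer version is published, you can wire it into a function in your test environment. A quick way to do that from the command line - the function name and layer ARN below are placeholders:

# list the versions that have been published for the layer
aws lambda list-layer-versions --layer-name SomeId-my-branch-dev
# attach a specific version to a test function
aws lambda update-function-configuration --function-name my-test-function \
  --layers arn:aws:lambda:us-east-2:123456789012:layer:SomeId-my-branch-dev:1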
When you are done building your workflow, push it up to GitHub and try it out by creating a new pull request. Don't be discouraged when it does not work on the first try. In fact, a solid strategy is to build up the file section by section so you can see how it all works. You can see the actions running in the Actions tab in your repo. You should see a history of all your prior runs in this tab too.
GitHub Repo Configuration
The next step is to get your GitHub repo configured - this is going to be a bit different for each team/repo, depending on which workflow your team chooses to use. I personally use a very simple main/feature-branch workflow, since most of my repos and teams are fairly small and we rarely have to worry about clobbering each other's changes. This simple workflow is also very fast, which allows us to move changes into production multiple times a day. With all that said, go to the Settings tab in your repo and select Branches. We want to create a new rule in Branch Protection Rules - so click Add Rule. There's a full description of the options in the GitHub Documentation.
There are a bunch of interesting options in here that can make your development process more secure and less prone to accidents. The option we are particularly interested in is Require status checks to pass before merging. This allows us to hook in our GitHub Actions and will require the Actions to run cleanly before the code can be merged back into your main branch.
GitHub Action Workflows
With the branch rules configured, you will now find that the pull request cannot be merged unless the workflow completes successfully. And since the layer is now installed in your test environment, a code reviewer can look at the code and then go to the environment to make sure everything is working the way they would expect it to.
Of course, you can do all kinds of testing once you get a couple of workflows running. A few ideas:
- Blocking merges until all the unit tests pass (a sketch of such a workflow follows this list)
- Making sure that static analysis passes
- Running integration tests - like API checks against the test environment the layer was just installed in
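As a starting point for the first idea, a minimal test workflow might look something like this - assuming pytest and the requirements/test.txt layout described earlier:

name: Run Tests
on:
  pull_request:
    branches: [ main ]
jobs:
  test:
    name: Unit Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Install dependencies and run the tests
        run: |
          pip install -r requirements/test.txt
          pip install .
          pytest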
Let your imagination run wild!