If you are developing AWS Lambda functions, you may wish to test those functions without deploying them to your AWS account. There are a few good reasons why deploying to AWS is not the first choice for this (and cost is not one of them).
- Speed – a local deploy has a much shorter roundtrip time
- Visibility – hopefully a local deployment will give you the ability to see logs and maybe even run a debugger
- Resource management – you could have some spare AWS environments kicking around, or could spin up variants on demand, but that can get quite complicated to manage, especially for running a small integration test
- Access control – you may not wish your AWS account to be open to CI builds running tests
Whatever your reasoning, let’s skip to the answer.
To Test AWS Lambda Locally
You should use:
- AWS SAM CLI
- Your favourite test library
Great. Job done… but wait… how?
AWS SAM CLI is a pig to run in Docker
The AWS SAM CLI itself runs your function inside a lambci Docker image. To run AWS SAM CLI you may want to install it locally and just use it via the sam local start-api command. And bless you if that works for you.
If you want to dockerise this, or if, like me, you can’t be sure of every user of the test suite, including the CI server, having the CLI installed, then there’s a world of potential pain ahead.
A Configuration That Works (for me)
This example is optimised around a lambda serving an HTTP request via API Gateway.
- A home-made docker image with the AWS tools in – SAM and AWS CLI
- A local directory with the template.yml file which defines the service (sam init will make that for you)
- A path from that same directory to the artifact to run in the lambda
- A writeable temp directory at /tmp
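For reference, a minimal template.yml for an API-served lambda looks something like this (the function name, handler, runtime and artifact path are placeholders for whatever your service actually uses):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      # CodeUri points at the artifact, relative to this file
      CodeUri: ./target/hello.jar
      Handler: com.example.Hello::handleRequest
      Runtime: java8
      Events:
        HelloApi:
          # An Api event wires the function to API Gateway
          Type: Api
          Properties:
            Path: /hello
            Method: get
```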
Here’s the Dockerfile I’ve been using:

```dockerfile
# Base image: Python with pip (pick whichever tag suits you)
FROM python:alpine

RUN pip install awscli --upgrade

# Build dependencies for the SAM CLI's native extensions
RUN apk add gcc libc-dev curl python-dev

# Docker client, so SAM can start the lambci container via the host's socket
RUN apk add docker

RUN pip install aws-sam-cli

# Region names are lowercase; the value is arbitrary here
ENV AWS_DEFAULT_REGION eu-west-1

ENTRYPOINT sam local start-api --host 0.0.0.0 --port 3000
```
This starts with Python, adds in the AWS tools, opens port 3000 for API Gateway to serve the lambda on, sets an arbitrary region (my lambda doesn’t use any AWS resources, so this doesn’t matter to me) and then starts the API.
This docker container will be started from the directory which contains my template.yml and will map that directory to the /var/opt folder inside the container as a volume. I’ll come to that in a moment.
Note: the host has been set to 0.0.0.0 in order to allow the api to be accessed from outside of the docker container itself. By default, it only listens to 127.0.0.1 which doesn’t allow an external test to get into it.
In my working example, the above docker container is started, along with the subsidiary services, using docker-compose.
Note that the docker-compose configuration is doing some of the magic:
- The docker socket of the real host is used to allow the docker command inside the container to start the lambci container
- Port 3000 is exposed as 3000 on local host
- The current working directory is mapped as /var/opt inside the container
- The real /tmp directory is mounted in the container too
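The bullets above translate into a docker-compose.yml along these lines (the image and service names are illustrative, not from my actual setup):

```yaml
version: '3'
services:
  sam-local:
    # The image built from the Dockerfile above
    image: my-sam-local
    ports:
      # API Gateway emulation reachable on localhost:3000
      - "3000:3000"
    volumes:
      # Let the docker command inside the container drive the host's daemon
      - /var/run/docker.sock:/var/run/docker.sock
      # The directory containing template.yml, mapped to /var/opt
      - .:/var/opt
      # Share the real /tmp so SAM's exploded artifact paths exist on the host
      - /tmp:/tmp
    working_dir: /var/opt
```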
The reason you need to mount /tmp is that the SAM CLI will pretty much only work if it thinks it is exploding the zip/jar file of your service for you and mounting the exploded version onto the lambci container it starts. The only configuration I found to work is to share the real-world /tmp. The root cause is that the docker command issued inside the SAM CLI container is actually executed by the surrounding host’s daemon, and SAM CLI makes no allowance for the fact that the path it sees may not be one the actual docker daemon is able to mount.
The above is from a working setup, but… my lambda needed to talk to a service I also stood up with docker compose. As my lambda was run in a sibling docker container to that service, I had to define a docker network and join both the service and the lambci container with that network. This was done with:
- Adding the --docker-network switch to the sam local start-api command
- Defining the network outside of docker-compose
- Having docker-compose use it as an external network
- Connecting that network to the service my lambda wanted to use
- Forcing the sam-local service to use bridge network_mode, rather than defaulting to the external network mentioned in the docker-compose.yml
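Pieced together, those steps look roughly like this; the network and service names here are made up for illustration:

```yaml
# First, define the network outside of docker-compose:
#   docker network create lambda-test-net
#
# docker-compose.yml
version: '3'
services:
  sam-local:
    image: my-sam-local
    # Keep sam-local itself on bridge, not the external network...
    network_mode: bridge
    # ...and tell SAM to attach the lambci container to the shared network
    entrypoint: sam local start-api --host 0.0.0.0 --port 3000 --docker-network lambda-test-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - .:/var/opt
      - /tmp:/tmp
    working_dir: /var/opt
    ports:
      - "3000:3000"
  backend-service:
    image: my-backend
    # The service my lambda talks to joins the shared network
    networks:
      - lambda-test-net
networks:
  lambda-test-net:
    # Declared external because it was created outside docker-compose
    external: true
```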
For simplicity, I’ve left the above out of my code snippets as it probably needs a fully working example to understand it. If enough people want that I’ll write a follow up. You know where the comments section is.
You can run AWS SAM CLI in docker, but it’s tricky. The team haven’t put enough into the flexibility of the tool for it to be easy, but the above techniques seem to work.