Deploying machine learning models to AWS Lambda, with Connexion, Zappa and Docker

In this tutorial we will learn how to deploy any Python 3 application to AWS Lambda without changing any code or even opening the Lambda console.

Our proposed architecture:

- Python 3. A popular programming language among data scientists.

- Flask. A micro web framework that works with few dependencies and libraries.

- OpenAPI 2.0, formerly known as Swagger 2.0. A specification for describing REST APIs.

- Zalando’s Connexion. Handles routing, request validation and response validation automatically from an OpenAPI definition.

- Docker. Packages the build and deployment tooling into a reproducible environment.

Start by downloading our boilerplate application from GitHub.

Meet Micropython, a lightweight API microservice we open-sourced, designed for easy deployment.

Find the source code here.

git clone https://github.com/BrainrexAPI/micropython.git

Create a virtual environment.

cd micro-python
python3 -m venv venv
source venv/bin/activate

Run the application locally to check that everything is working.

pip install -r requirements.txt
python app.py

Open http://0.0.0.0:5000/ui in a web browser and you should see the interactive Swagger UI.
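Connexion serves that UI straight from the OpenAPI definition. As a rough sketch (the actual file in the repo may differ in paths and handler names), a minimal OpenAPI 2.0 spec wires a URL path to a Python function through operationId:

```yaml
# Hypothetical minimal spec for Connexion; the repo's real
# definition may use different paths and handlers.
swagger: "2.0"
info:
  title: Micropython
  version: "1.0"
basePath: /
paths:
  /predict:
    post:
      operationId: app.predict   # Connexion routes the request to app.predict()
      parameters:
        - in: body
          name: body
          required: true
          schema:
            type: object
      responses:
        200:
          description: Model prediction
```

Connexion validates incoming requests against this schema before your handler runs, so the handler itself stays free of routing and validation boilerplate.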

Now let’s deploy to Lambda. If you haven’t set up your AWS credentials locally, here is a quick way to do it. First, get authenticated. For the sake of simplicity, this tutorial uses very permissive permissions (do NOT use these settings in production).

Go to Console > IAM > Users > Security Credentials > Access Keys

Download Access Keys CSV

aws configure

export AWS_DEFAULT_REGION=us-west-2
export AWS_PROFILE=default
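After `aws configure`, your keys live in two files under ~/.aws. They look roughly like this (placeholder values shown; use the keys from your downloaded CSV):

```
# ~/.aws/credentials  (placeholder values)
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = us-west-2
```

The profile name in brackets is what AWS_PROFILE refers to; it must match the profile you later pass into the Docker container.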

Build your Docker image from the Dockerfile.
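For reference, the image only needs Python plus the AWS and Zappa tooling. A hypothetical Dockerfile along these lines would do (the repo’s actual Dockerfile may differ, e.g. it may build on a Lambda-compatible base image):

```
# Hypothetical sketch; see the repo for the real Dockerfile.
FROM python:3.6

RUN pip install awscli zappa virtualenv

# Zappa runs from the project directory, which we mount
# into /var/task at `docker run` time.
WORKDIR /var/task

CMD ["bash"]
```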

$ docker build -t myimage .

Now we create an alias for better reusability. This command runs the Docker image with your AWS_PROFILE as an argument, mounting your project directory and AWS credentials into the image you just built.

alias micropyshell='docker run -ti -e AWS_PROFILE=zappa -v $(pwd):/var/task -v ~/.aws/:/root/.aws --rm myimage'

Note that AWS_PROFILE here must match a profile defined in ~/.aws/config; if you configured the default profile above, use AWS_PROFILE=default instead of zappa. Add the alias to your ~/.bash_profile so the command is available in future shells.

alias micropyshell >> ~/.bash_profile

Run the command you just created. This will run the Docker image with your AWS configuration mounted inside. Inside this container, we will install our dependencies and deploy to Lambda with Zappa.

micropyshell

Now you should be inside the Docker container shell.

micropyshell>

Let’s create a virtual environment inside the container.

virtualenv venv
source venv/bin/activate

Install your Python packages inside the Docker container.

pip install -r requirements.txt

Deploy your service to Lambda. After you run this command, here is what happens behind the scenes.

From Zappa docs. Credit: https://github.com/Miserlou/Zappa

zappa deploy dev

Afterwards you should be given an API Gateway URL for your service.
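zappa deploy dev reads its configuration from zappa_settings.json in the project root (running `zappa init` generates one). A sketch with assumed values — your app_function, project_name, and s3_bucket will differ:

```
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-west-2",
        "profile_name": "default",
        "project_name": "micro-python",
        "runtime": "python3.6",
        "s3_bucket": "zappa-micro-python-uploads"
    }
}
```

The "dev" key is the stage name you pass to `zappa deploy`; you can define additional stages (e.g. "production") side by side in the same file.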

Troubleshooting

Error 1. Profile not found / not finding the config file.

botocore.exceptions.ProfileNotFound: The config profile (default) could not be found

This means the profile named in AWS_PROFILE does not exist in the ~/.aws/config mounted into the container. Check that the profile name in your alias matches the one you configured.

If an error occurs while your Python program is running, use zappa tail to see your logs.

You can also see your logs in CloudWatch, reachable from the API Gateway console.

Error 2. 500 server error.

If your deployment package is too large for Lambda (common with machine learning dependencies), enable Zappa’s slim handler and raise the memory limit in zappa_settings.json:

"slim_handler": true,
"memory_size": 3008,

Error 3. Problems with the NLTK library.

NLTK looks for its corpora on disk, and they are not bundled by default. Create an environment variable in the Lambda console pointing NLTK at the data you package with your code.

Solution: https://stackoverflow.com/questions/42382662/using-nltk-corpora-with-aws-lambda-functions-in-python
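Concretely, NLTK honors the NLTK_DATA environment variable. Assuming you ship the corpora in an nltk_data/ folder inside your deployment package (the folder name is your choice), the variable would look like:

```shell
# Set in the Lambda console (Configuration > Environment variables),
# or locally for testing. Assumes corpora were bundled at ./nltk_data.
export NLTK_DATA=./nltk_data
```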

Error 4. You are using Python 2.7.

It won’t work with this setup. Check your Python version with python --version and make sure it is Python 3.

Error 5. Here’s a weird one: your package requires a GPU.

Lambda functions have no GPU. For example, spaCy’s thinc dependency may fail on imports like:

from thinc.neural.util import prefer_gpu, require_gpu

Solution: https://stackoverflow.com/questions/49186886/spacy-throws-oserror-when-deployed-to-aws-lambda-using-zappa
