An Open Source AWS AMI for Training Style Transfer Models


Create Your Own Style Transfer Model

In keeping with our mission to democratize access to state-of-the-art algorithms, we’re pleased to open source the AWS AMI needed to launch a GPU-backed EC2 P2 instance, along with the pipeline required to train your own style transfer models.

We recently taught you how to use the Deep Filter algorithm (our implementation of Texture Networks: Feed-forward Synthesis of Textures and Stylized Images) to get started with style transfer. But creating your own filter is no easy task, even for savvy developers.

Deep learning algorithms require GPU environments to run efficiently, and those environments can take a significant amount of time to configure and manage.

Our Amazon Machine Image will take care of installing the necessary software to your own EC2 instance, including the drivers, training dataset, deep learning library (Torch), and the TextureNet model.

You’ll then be able to train a model using an image of the artistic style of your choice.

The complete process of setting up and training your model will take approximately 26 hours and cost less than $25 per model created (about two hours to install your environment and 24 hours to train at $0.90 per hour using an AWS p2.xlarge instance).

Once you’re finished with the tutorial, you’ll have set up a deep learning environment, trained a custom style transfer filter, and made it available as a scalable REST API, all without the hassle of managing dependencies or integration issues yourself.

Set Up the Algorithmia Deep Filter AMI

Our AWS AMI holds almost everything you need to launch an EC2 instance ready for training your own filter. This image includes the operating system, drivers, application server, and more.

Step 1: Choose the AMI:

Create or log in to your AWS account and begin the process of launching an EC2 p2.xlarge instance from the EC2 dashboard.

First, select Community AMIs, and then search for and select the algorithmia-deepfilter-training-ubuntu AMI.

Choosing an Algorithmia Amazon Machine Image

Step 2: Choose Instance Type

Next, choose GPU compute p2.xlarge as the instance type.

AWS GPU compute p2.xlarge EC2 instance

Click Review and Launch. You’ll get a chance to review your instance before launching.

Step 3: AWS Key Pair

Use an existing key pair or create a new one on the following screen.

Set your AWS key pair

Download and save your key pair *.pem file to your machine.
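SSH will refuse a key file whose permissions are too open, so it’s worth locking the file down as soon as you save it. A minimal sketch, using a stand-in filename in place of your downloaded key:

```shell
# Stand-in for your downloaded key pair file (hypothetical name)
touch my-key.pem

# Restrict the key to owner read-only, as ssh requires
chmod 400 my-key.pem
ls -l my-key.pem
```

Run the chmod against the actual path where you saved your *.pem file.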

Step 4: SSH Into Your EC2 Instance

Let’s SSH into your newly created instance and clone the script for installing the environment.  

Be sure to substitute the path to your key pair *.pem file and your instance’s public IP address (available from your EC2 Dashboard).

ssh -i path/to/key.pem ubuntu@<server_public_ip_address>

Having issues connecting to your EC2 Instance? Check out the steps at the very bottom of this guide for resolving AWS SSH issues.

Next, clone the Deep Filter repository:

git clone

Step 5: Setup Your Environment

Now you have the necessary script to install the environment needed to train your model.

The following command will run the bash script, which installs the NVIDIA driver, CUDA, and cuDNN. It also installs Torch, a deep learning framework:

. deepfilter-training/

Notice the period in front of the script. Make sure you include it so the environment variables are set in the bash shell from which the script was executed.
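The leading period matters because a script run as a child process can’t modify its parent shell’s environment, while a sourced script can. A quick illustration with a throwaway script (hypothetical filename and variable):

```shell
# Create a tiny script that exports an environment variable
cat > setenv.sh <<'EOF'
export DEMO_VAR=hello
EOF

# Running it as a child process: the export does not survive
bash setenv.sh
echo "after bash: ${DEMO_VAR:-unset}"

# Sourcing it with the leading dot: the export persists in this shell
. ./setenv.sh
echo "after dot:  ${DEMO_VAR:-unset}"
```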

This script takes an hour or two to run, so go get lunch.

Train The Style Transfer Model

Okay, that’s it for setting up your environment. Next we’ll show you how to add the image you want to use for style transfer, train the model, and then deploy it as an API.

Step 1: Download a Training Image

With your environment set up, let’s add the image you want to train your model with. The image needs to be saved to the texture_nets directory. We’ve made this easy for you, so all you have to do is run this command:

wget <url_to_image>

Now rename the style image so the training model can find it:

mv <image_filename> style.jpg

Step 2: Train The Model

The command below will kick off the training of your neural net. The model will continue training until it hits 50k iterations, which will take approximately 24 hours.

If you don’t want to wait that long, a model file gets saved every thousand iterations. However, the fewer the iterations, the lower the quality, in our experience.

th train.lua -data dataset -style_image style.jpg -style_size 600 -image_size 512 -model johnson -batch_size 4 -learning_rate 1e-2 -style_weight 10 -style_layers relu1_2,relu2_2,relu3_2,relu4_2 -content_layers relu4_2

Note: If you want to change the default parameters for training a network, please refer to texture_nets on GitHub.
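Because the training run lasts around 24 hours, a dropped SSH connection would otherwise kill it. One way to guard against that is nohup (screen or tmux work just as well); in the sketch below, a short sleep stands in for the real train.lua invocation:

```shell
# "sleep 1; echo ..." stands in for the long-running: th train.lua ...
nohup sh -c 'sleep 1; echo training-finished' > train.log 2>&1 &
TRAIN_PID=$!
echo "training running in background as pid $TRAIN_PID"

# You can now disconnect safely; later, check progress with tail
wait "$TRAIN_PID"
tail train.log
```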

Once it’s finished you’ll find all of your model files under the folder: /texture_nets/data/checkpoints
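Since a checkpoint is written every thousand iterations, that directory fills up with numbered model files. A small sketch (using dummy files in place of real checkpoints) for picking out the most recent one:

```shell
# Dummy checkpoint files standing in for real training output
mkdir -p data/checkpoints
touch data/checkpoints/model_1000.t7 \
      data/checkpoints/model_2000.t7 \
      data/checkpoints/model_10000.t7

# Sort numerically on the iteration count and keep the last entry
latest=$(ls data/checkpoints/model_*.t7 | sort -t_ -k2 -n | tail -n 1)
echo "$latest"
```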

Here’s an example of what you might see when running the above command:

Sample Script Output

Once the model has been trained, you’re ready to test your custom filter.

Step 3: Setup CLI Tools

Here comes the fun part, testing your new image filter!

First, install the Algorithmia CLI Client:

curl -sSf | sh

Then login to your Algorithmia account:

algo auth

Enter your API key when prompted.

Next create a new collection in your account and store your trained model file:

algo mkdir .my/DeepFilterTraining

Now copy your model file to your newly created data collection (the source path assumes you are in the texture_nets directory):

algo cp data/checkpoints/model_50000.t7 data://.my/DeepFilterTraining/my_model.t7

Note: If you did not run it for the full 50k iterations, you will need to adjust the model number to the last one saved.

For example, if you only ran it through 10k iterations, you would change the command to:

algo cp data/checkpoints/model_10000.t7 data://.my/DeepFilterTraining/my_model.t7

Want to learn more about how to use Deep Filter and the command line to filter images?

Step 4: Test Your Model With Deep Filter

Let’s test our new model. Upload an image that you want to apply your new filter to into your Algorithmia Hosted Data collection, S3 bucket, or Dropbox. Then, run the following command (in this example we’re using an image stored in Data Collections):

algo run deeplearning/DeepFilter/0.5.x -d '{"images": ["data://.my/DeepFilterTraining/your_image.jpg"],"savePaths": ["data://.my/DeepFilterTraining/stylized.jpg"],"filterName": "data://.my/DeepFilterTraining/my_model.t7"}'

Make sure you wrap the input arguments in single quotes.

Finally, you can view your stylized image at this URL: <username>/DeepFilterTraining/stylized.jpg

Note: You need to be logged into your Algorithmia account, and replace <username> with your login name. Alternatively, you can go to Data Collections under your Profile and click the link under the DeepFilterTraining collection.

And that’s it! You just created a custom image style filter.

Step 5: Calling As An API

Now that you have your filter, let’s make it accessible as an API that can be integrated into apps and services. We’ll use the path to our model file hosted on Algorithmia and pass it to the Deep Filter algorithm to stylize an image.

{
  "images": [ "data://deeplearning/example_data/test.jpg" ],
  "savePaths": [ "data://.my/temp/test_swirly_swirls.jpg" ],
  "filterName": "data://.my/DeepFilterTraining/my_model.t7"
}

Now you can train your own custom style transfer filter and call it as an API from your app, website, or service.

Here is an example of a stylized image trained on Alphonse Mucha’s Dance as the artist’s style. Using our new custom filter, we created this photo with what we’ll just call the “Senior Citizen” filter.

Before and After Style Transfer

Show us your stylized images and let us know if you used a customized image filter @Algorithmia on Twitter!


Resolving AWS SSH Issues

If you are having trouble connecting to your newly created instance via SSH on port 22, setting up a new VPC might solve the issue. These instructions assume you are creating a VPC from scratch on your AWS account.

Start by going to your AWS VPC Dashboard.

Step 1: Create VPC

Create Your VPC

Name tag: myVPC
CIDR block:
Tenancy: Default

Step 2: Create Subnet

Name tag: mySubnet
VPC: myVPC (the VPC you created in Step 1)
Availability Zone: No preference
CIDR Block:

Step 3: Create Internet Gateway

Name tag: MyInternetGateway
After creation, right-click it and attach it to the VPC created in Step 1 (myVPC)

Step 4: Create Route Table

Name Tag: MyRoutes

Under Routes, hit “Edit” and add a route with the following details:
Destination: 0.0.0.0/0
Target: the Internet Gateway created in Step 3 (MyInternetGateway)

Now your newly created instance should be accessible via SSH.