Moving From Amazon S3 to DigitalOcean

Recently I received an email from DigitalOcean asking why I had left and hadn’t been back. I’ve never had a problem with them, and quite honestly I prefer their services over Amazon’s (I like having more control). So I decided: why not give them another try?

Today I moved my website from AWS S3 to a DigitalOcean droplet in 15 minutes.

The main reasons for this move are simple:

  1. Why not
  2. I felt like it
  3. My website is static (Middleman), which makes it an easy test case

Setting Up The Droplet

Before setting up the droplet, we have to understand the differences between hosting static content on S3 and on a droplet. S3 is an object store: the data just lives there, and S3 provides a way to enable static content hosting with the flip of a switch. DigitalOcean, on the other hand, provides images where you can run full servers. There’s no out-of-the-box way to host static content without a little bit of configuration.

To easily start hosting static content on my droplet, I decided to use Docker with an NGINX image. I could have just installed nginx on an Ubuntu droplet and served my content that way, but then I would have to redo the setup process every time I needed to recreate the server. With Docker, I can just upload the needed files and run docker-compose up -d.

Dockerizing Static Content

I decided to go with Kyle Mathews’s docker-nginx image due to its simplicity.

# Dockerfile
FROM kyma/docker-nginx
COPY ./build/ /var/www # ./build is the directory where middleman outputs the built site

# docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: nginx
    ports:
      - 80:80

You can test your setup by creating a new docker machine (or using an existing one) and running docker-compose build && docker-compose up. Your static content should then be accessible at the machine’s IP address.
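The local test run described above can be sketched as the following commands. This is a hedged sketch, not the exact workflow from the post: the machine name dev and the virtualbox driver are arbitrary choices, and you may already have a machine you can reuse.

```shell
# Create a local test machine (name "dev" and the virtualbox driver are placeholders)
docker-machine create --driver virtualbox dev

# Point the docker CLI at the new machine
eval "$(docker-machine env dev)"

# Build the image and start the container in the foreground
docker-compose build && docker-compose up

# In another terminal: print the machine's IP, then open it in a browser
docker-machine ip dev
```

Once you see the site at that IP, the same image will behave identically on the droplet, since Docker runs the container from the same build.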

It’s Alive!

All that is left is to get it running on the droplet. To do this, I created a deploy script which automates uploading the built files and rebuilding/restarting the nginx container as needed.

#!/usr/bin/env bash

set -e

# Build app
middleman build

# Sync code
# The --exclude flags tell rsync to ignore those files or directories
rsync -avpz --delete \
    --exclude=/.git \
    --exclude=/source \
    --exclude=/.sass-cache \
    --exclude=/.gitignore \
    --exclude=/.editorconfig \
    --exclude=/config.rb \
    --exclude=/Gemfile \
    --exclude=/Gemfile.lock \
    -e ssh \
    /Users/mporter/dev/mporter.middleman/ root@${CONTAINER_IP}:/root/www

# Build and restart container
ssh root@${CONTAINER_IP} "cd /root/www && docker-compose build && docker-compose up -d"

# Remove build dir
rm -r ./build
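The script reads the droplet’s address from the CONTAINER_IP environment variable, so that has to be set before it runs. A hypothetical invocation, assuming the script above is saved as deploy.sh (both the filename and the IP below are placeholders, not from the original post):

```shell
# 203.0.113.10 is a documentation placeholder; substitute your droplet's IP
CONTAINER_IP=203.0.113.10 ./deploy.sh
```

You could also export CONTAINER_IP from your shell profile so deploys become a single command.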

Run the script and everything is complete. The new server is all set up.