Docker Container Orchestration

In the first post on how we moved our application to Docker on AWS, I described our reasons for doing so and how we set up the servers. This post describes how we've organized the different Docker containers.

Container Setup On AWS

Our setup on AWS looks like this:

Qwaya AWS Setup

We use an Elastic Load Balancer (ELB) to route traffic to our web hosts. This is an easy way to set up clustering, but you should be aware that ELB does not let you configure a fallback route or HTML page for when no hosts are available. This means that if all instances are down for some reason, the only thing shown is a blank page. This has been a long-standing feature request which for some reason hasn't been implemented yet.

Linking Docker Containers To Form Pods

Our web hosts are what has come to be called a pod these days. Each pod consists of two parts: an Nginx container, which serves static content and proxies calls to the backend container, which runs our Django app under Gunicorn.

Now, the vast majority of the static content served by the Nginx container is generated by the Django app. If the Nginx container hosted that content itself, we would have to redeploy both containers every time we updated the backend.

To avoid this, we use Docker volume sharing (together with container linking), which allows the Nginx container to serve content straight from the backend container's volume.

The backend's Dockerfile exposes the directory with the static content:

FROM qwaya/our-own-debian:latest

...

VOLUME /opt/app/generated_content

...

The Nginx server is configured to read from that path:

server {

    ...

    location /static {
        alias /opt/app/generated_content;
        expires 24h;
    }
}
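For context, the proxy side of the same server block might look something like this. This is a sketch, not our exact configuration; the hostname comes from the `backend` link alias and the port from the `8000` the backend exposes:

```nginx
    location / {
        # "backend" resolves to the linked container
        # (--link backend:backend adds it to /etc/hosts)
        proxy_pass http://backend:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
```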

Finally, the Docker containers are started: the backend container exposing the volume,

# systemd config
ExecStart=/usr/bin/docker run --name backend \
    -p 8000:8000 \
    qwaya/app-backend:latest

and the frontend linking to it.

# systemd config
ExecStart=/usr/bin/docker run --name frontend \
    --link backend:backend \
    --volumes-from backend \
    -p 80:80 \
    qwaya/app-frontend:latest
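Since `--link` and `--volumes-from` require the backend container to already exist, the frontend's systemd unit presumably needs an ordering dependency on the backend's. A sketch, with hypothetical unit names:

```
# systemd config (frontend unit; unit names are assumptions)
[Unit]
Requires=docker-backend.service
After=docker-backend.service
```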

Workers For Asynchronous Jobs

The workers are responsible for asynchronous jobs and can be scaled horizontally. Historically, on our one-server setup, falling behind on the job queue could lead to long periods of catching up. When we launched the new architecture on AWS, we initially started too few workers and built up a large backlog of jobs. It was a great feeling when we solved it simply by starting 10 additional servers, which worked through the queue in 15 minutes.
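The pattern that makes this work is simple: workers pull jobs from a shared queue, so adding workers increases the drain rate without any code changes. A minimal sketch in Python, where threads stand in for worker instances and an in-process queue stands in for whatever backs the real job queue:

```python
# Illustrative sketch of horizontally scalable queue workers.
# Threads stand in for separate worker servers; the real system
# uses independent processes pulling from a shared job queue.
import queue
import threading

def run_worker(jobs, results, lock):
    # Each worker pulls jobs until the queue is drained.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        outcome = job * 2  # stand-in for real work
        with lock:
            results.append(outcome)

def drain_queue(n_jobs, n_workers):
    jobs = queue.Queue()
    for i in range(n_jobs):
        jobs.put(i)
    results = []
    lock = threading.Lock()
    workers = [
        threading.Thread(target=run_worker, args=(jobs, results, lock))
        for _ in range(n_workers)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results

# Catching up on a backlog is just a matter of starting more workers.
backlog = drain_queue(n_jobs=100, n_workers=10)
```

Because no worker holds state between jobs, the count of workers is a free parameter — exactly why starting 10 extra servers cleared our backlog.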

Backing Everything With RDS

Just to comment on the last piece of the picture above: we use RDS for storage. I can't say that I love every piece of AWS, but RDS is really great, as it makes setting up and managing MySQL or PostgreSQL very easy.

Docker Compose For Development

In development, we run the same setup as above, with the exception that files are mapped in from the local filesystem. We use docker-compose (formerly fig) for this:

# Database image with an easy to guess password
databasemaster:
  image: mysql:5.6
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: toor
  volumes:
    - ./tmp/mysql/:/var/lib/mysql
# Mailcatcher is awesome for testing mailings.
mailcatcher:
  image: schickling/mailcatcher:latest
  ports:
    - "1080:1080"
# The backend container, which sets up mappings to the database
# image. Mailcatcher is also setup as a link and dev
# configurations point to it.
backend:
  image: qwaya/app-backend:latest
  ports:
    - "8000:8000"
  environment:
    - DB_HOST=database
    - DB_USER=root
    - DB_PWD=toor
  links:
    - databasemaster:database
    - mailcatcher:mailcatcher
  volumes:
    - <file mappings>
taskrunner:
  image: qwaya/app-backend:latest
  environment:
    - DB_HOST=database
    - DB_USER=root
    - DB_PWD=toor
    - PROCESS_NAME=task
  links:
    - databasemaster:database
    - mailcatcher:mailcatcher
  command: <task runner command>
  volumes:
    - <file mappings>
frontend:
  image: qwaya/app-frontend:latest
  ports:
    - "80:80"
  # Map to the backend
  links:
    - backend:backend
  # Automount volumes from backend
  volumes_from:
    - backend

Summary

The isolation and composability of containers are really useful, and have helped us improve a not-so-great one-server setup without any big changes to the code. Being able to run the very same setup on your laptop as in production is truly awesome.

The next post will explain how we set up our build pipeline to support the new architecture.