Multi-node Docker deployments with persistent storage
By default, Docker doesn’t persist state. This is usually fine for worker applications that don’t need to store and share data; however, for databases or even small WordPress instances, persisting state is important.
In this guide, I’ll take a look at how to set up Docker and Flocker on DigitalOcean infrastructure.
To begin, spin up three Ubuntu 14.04 nodes with Docker installed (you can use the Docker 1.7 template on DigitalOcean to save time installing Docker).
Also, make sure that you have included your local machine’s public key in the setup process for the virtual machines.
1. Installing Flocker CLI
Flocker’s CLI can run on basically any Unix system, but for the purposes of this guide, I’ll only focus on Mac and Ubuntu.
The Flocker CLI will allow you to interact with your cluster’s nodes to manage deployments and migrations, and Flocker itself will handle moving containers. Don’t worry: the CLI doesn’t store any data itself, it is merely a client; your data lives on the nodes.
On Ubuntu 14.04, the Flocker CLI can be installed using ClusterHQ’s repo:
sudo apt-get update
sudo apt-get -y install apt-transport-https software-properties-common
sudo add-apt-repository -y "deb https://clusterhq-archive.s3.amazonaws.com/ubuntu/$(lsb_release --release --short)/\$(ARCH) /"
sudo apt-get update
sudo apt-get -y --force-yes install clusterhq-flocker-cli
Mac OS X
On Mac OS X, make sure you have Homebrew installed, and use the brew doctor command to fix anything that may be an issue before installing.
Now that your Mac is ready, add the ClusterHQ tap and install flocker.
brew tap ClusterHQ/tap
brew update
brew install flocker-1.0.3
brew test flocker-1.0.3
2. Setting up Flocker nodes
These nodes will make up your infrastructure and host the containers and data stored on the containers.
2.1 Installing packages & dependencies
To make this a little easier, create a bash script from the steps below so you can run the same install across all of the nodes.
Create a file called flocker-init-node.sh and populate it with these contents:
#!/bin/bash
apt-get -y install apt-transport-https software-properties-common
add-apt-repository -y ppa:james-page/docker
add-apt-repository -y "deb https://clusterhq-archive.s3.amazonaws.com/ubuntu/$(lsb_release --release --short)/\$(ARCH) /"
apt-get update
apt-get -y --force-yes install docker.io clusterhq-flocker-node
mkdir /etc/flocker
chmod 0700 /etc/flocker
Then run the command:
ssh -i <path to your ssh key> root@<your node's IP> 'bash -s' < flocker-init-node.sh
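Since the same script has to run on all three nodes, you can wrap the ssh call in a loop. This is only a sketch: the IP addresses and the key path are placeholders, and the commands are echoed as a dry run, so remove the echo to actually execute them.

```shell
# Dry run: print the bootstrap command for each node.
# The node IPs and the SSH key path below are placeholders.
NODES="203.0.113.10 203.0.113.11 203.0.113.12"
for ip in $NODES; do
  echo "ssh -i ~/.ssh/id_rsa root@$ip 'bash -s' < flocker-init-node.sh"
done
```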
2.2 Configuring cluster authentication
Flocker communicates between the different parts of your cluster using TLS.
We’re going to use the machine running the Flocker CLI to create a certificate authority that secures communication across your cluster. Please note: keep the generated certificate and key private; the security of your cluster relies on these files remaining secret.
flocker-ca initialize supercoolcluster
Substitute supercoolcluster with your cluster’s name. The output will be two files in your working directory: cluster.key, which must stay private, and cluster.crt, which we will need to use again later.
I’ll run through copying these files over to the server running the control service shortly; this is the server that will provide the HTTP API for the CLI.
Next, create a control certificate that each of the nodes will use to communicate with the control service. The control service can run on one of the nodes, but it is recommended to run it on its own server.
flocker-ca create-control-certificate hostname-of-your-control-service-machine.io
Change the hostname to whatever your control machine’s address is. Make sure that you use the FQDN and not an IP address, as an IP address may break some HTTPS clients.
This will produce a certificate file and a key, which should be renamed to control-service.crt and control-service.key respectively. Copy both control-service.key and control-service.crt over to the server that will be running the control service, along with the cluster.crt file. These should be placed in /etc/flocker/ on the control server.
It’s also recommended that you secure the permissions of the directory and the key file:

chmod 0700 /etc/flocker
chmod 0600 /etc/flocker/control-service.key
Generate a certificate for each of your nodes. This is done simply by running:

flocker-ca create-node-certificate
Run this as many times as you have nodes. The generated certificates will have UUID-style names that look something like h829d8j3-9832-983b-998j-28ch2oi3jsi3.crt. Rename each certificate to node.crt one at a time and copy it, along with your cluster.crt file, over to each node’s /etc/flocker/ directory.
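The generate-rename-copy cycle can also be scripted. Again a sketch with placeholder hostnames, shown as a dry run; the rename step is left as a comment because the generated UUID filename isn’t known ahead of time, and I copy the node’s key alongside its certificate since the node needs both.

```shell
# Dry run: print the per-node certificate workflow.
# The hostnames and SSH key path below are placeholders.
NODES="node1.example.com node2.example.com node3.example.com"
for host in $NODES; do
  echo "flocker-ca create-node-certificate"
  # rename the UUID-named .crt/.key pair to node.crt/node.key here
  echo "scp -i ~/.ssh/id_rsa node.crt node.key cluster.crt root@$host:/etc/flocker/"
done
```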
Also ensure that you secure the permissions on these certificates:
chmod 0700 /etc/flocker/
chmod 0600 /etc/flocker/*.crt
Before you can use Flocker’s API, you will need to generate a client certificate, which you can do by running:
flocker-ca create-api-certificate <username>
Provide the username of the user for whom the certificate should be created.
Configure the control node’s firewall so that the control service can be accessed remotely.
Add the following to the end of /etc/init/flocker-control.override (create the file if it doesn’t exist):

start on runlevel [2345]
stop on runlevel [016]
Then register the Flocker ports as named services by adding these lines to the end of /etc/services:

flocker-control-api 4523/tcp # Flocker Control API port
flocker-control-agent 4524/tcp # Flocker Control Agent port
ufw allow flocker-control-api
ufw allow flocker-control-agent
For more details on configuring the firewall, check out Ubuntu’s UFW documentation.
Once you’re done, start the Flocker control service:
service flocker-control start
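Once the control service is up and the firewall ports are open, you can confirm your API certificate works end to end by hitting the control service’s REST API over HTTPS. A sketch shown as a dry run: the hostname and the myuser.crt/myuser.key filenames are placeholders, and the version endpoint is my assumption about the API, so adjust to whatever your Flocker release documents.

```shell
# Dry run: print a client-authenticated request against the control API.
# Hostname and user cert/key filenames are placeholders.
CONTROL_HOST=hostname-of-your-control-service-machine.io
echo "curl --cacert cluster.crt --cert myuser.crt --key myuser.key https://$CONTROL_HOST:4523/v1/version"
```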
2.3 Configuring the Flocker Agent
To start the agents on a new node, a configuration file must exist on the node at /etc/flocker/agent.yml.
The file must include the version and control-service items similar to these:

"version": 1
"control-service":
   "hostname": "route to your control service hostname"
   "port": 4524
You will also need to set up a dataset backend: this is used when moving data between nodes (such as during a container migration).
For this tutorial, I will be using the AWS backend, which stores datasets on EBS volumes. Simply add the dataset configuration to the rest of your agent.yml:
dataset:
   backend: "aws"
   region: "your region"
   zone: "your zone"
   access_key_id: "ABAHAHAHAWOOKIES"
   secret_access_key: "MuaHa/H+AHAhAhAHjiGg4h6cvlWHOomOHHk0"
Obviously, swap out the region and zone for your own AWS region and availability zone, and replace the access_key_id and secret_access_key with your AWS credentials.
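Putting the two fragments together, a complete /etc/flocker/agent.yml would look something like this (the hostname, region, zone, and credentials are placeholders):

```yaml
"version": 1
"control-service":
   "hostname": "hostname-of-your-control-service-machine.io"
   "port": 4524
"dataset":
   "backend": "aws"
   "region": "us-east-1"
   "zone": "us-east-1a"
   "access_key_id": "ABAHAHAHAWOOKIES"
   "secret_access_key": "MuaHa/H+AHAhAhAHjiGg4h6cvlWHOomOHHk0"
```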
Now reboot the server; both flocker-dataset-agent and flocker-container-agent should be started.
3. Creating containers
We’re going to practice by deploying MongoDB (because it’s lightweight and easy).
Start by installing the mongodb client:
sudo apt-get install mongodb-clients
Mac OS X users:
brew install mongodb
Start by creating an application file, application.yml, for this container; this file should live on the machine with CLI access.
"version": 1
"applications":
   "mongodb-test-com":
      "image": "mongodb"
      "ports":
      - "internal": 27017
        "external": 27017
Then create a deployment file: deployment.yml
"version": 1
"nodes":
   "first-node-hostname": ["mongodb-test-com"]
   "second-node-hostname": []
I’m mentioning the second node, which has no applications deployed on it, to ensure that flocker-deploy knows that it exists and that we’re not running this application on it. Note that these files are meant to contain multiple applications, nodes, and deployment configurations, which is what makes larger-scale deployments with Flocker manageable.
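With both files in place, the deployment can be kicked off from the machine running the Flocker CLI. A sketch shown as a dry run: the control-service hostname is a placeholder, and I’m assuming the application file was saved as application.yml.

```shell
# Dry run: print the deploy command (remove the echo to run it for real).
CONTROL_HOST=hostname-of-your-control-service-machine.io
echo "flocker-deploy $CONTROL_HOST deployment.yml application.yml"
```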