Docker—a brief introduction

At Studio None, we’ve been using Docker since the beginning of the year and it has massively improved the way we handle our deployments. Docker’s primary objective is to wrap up an application and all its dependencies into a neat little container that can be deployed just about anywhere.

Containerisation isn’t necessarily new, but Docker brings new efficiencies that make sense for development teams making use of continuous deployment. Moreover, Docker makes it easy to scale horizontally and across availability regions.

How Docker works is easiest to understand visually.

Docker vs Traditional

In many traditional configurations, a single application is bundled up with a full guest operating system and mounted on a hypervisor. This consumes a considerable amount of resources, from RAM to disk space & CPU. Additionally, you must maintain multiple guest operating systems and manually keep their libraries up to date with the requirements of your application.

Docker provides a layer of automation and separation over the traditional configuration by containerizing only the components necessary for the application to run, while providing a level of operating system access that is locked down enough that your process cannot modify the underlying operating system or affect the processes running in other containers.

It’s recommended practice to run only one application per container. For example, if you’re running a Node.js application, an nginx server and a PostgreSQL database, you would containerize each process separately while providing links so that they can interact.
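
As a rough sketch of how linking works (my-node-app is a placeholder for your own application image), you might start a PostgreSQL container and then link an application container to it:

# my-node-app below is a placeholder for your own application image
docker run -d --name db postgres
docker run -d --name app --link db:db my-node-app

Docker then exposes the linked database to the app container through environment variables, so your application can find it without hard-coding an address.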

Docker Containers, Images & the Registry

Docker makes it amazingly easy to package an application with the libraries it needs to run on top of the Linux kernel. It does this by providing a registry of images to base containers on, and by layering the changes to an image to make it faster to download & deploy.

Creating Containers and Using Images

I’m not going to run through how to install Docker; if you’re looking for that guide, Docker has provided one themselves: https://docs.docker.com/mac/started/.

Like I said, Docker images are made up of file system layers that include the libraries & binaries required for your application to run. To see how this works, try creating an Ubuntu container using this simple command:

docker run -i -t ubuntu /bin/bash

The command will use the Ubuntu (ubuntu) image to run a single process (/bin/bash), which is made interactive (-i) and allocated a pseudo-TTY (-t), allowing us to emulate a real terminal interface.

Docker will now download the layers required for that image from Docker’s public registry. Similar to source control, Docker stores changes to the file system over time—you can roll back to a layer or merge layers into a single layer, depending on what you need. These layers can be stored in images and uploaded to the Docker registry.
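
To see the layers that make up an image you’ve downloaded, you can inspect its history:

docker history ubuntu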

Once downloaded, you can interact with a containerized Ubuntu environment. You’ll notice that if you exit the process, the Docker container shuts down. You can check this by running:

docker ps
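
Note that docker ps only lists running containers, so the container you just exited won’t appear. Add the -a flag to include stopped containers:

docker ps -a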

Any changes you made inside that container are now lost. So, to keep your changes and create your own image, you can commit the container from the host operating system while it is running. There’s a detailed guide on how to commit, pull and push Docker images in the Docker documentation.
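
A minimal sketch of that workflow, assuming a running container named my-container and a my-username/my-image repository you can push to:

# my-container and my-username/my-image are placeholder names
docker commit my-container my-username/my-image
# pushing requires a docker login to the registry first
docker push my-username/my-image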

Dockerfiles & Portability

Dockerfiles are the easiest way to automate image creation—they let you describe, step by step, how an image should be built. An example Dockerfile that creates a simple Ubuntu-based image:

# Base the image on the official Ubuntu 14.04 image
FROM ubuntu:14.04
# Each instruction below adds a new file system layer
RUN mkdir -p /data/
RUN echo "Hello" > /data/hello.txt
# The default process to run when a container starts from this image
CMD /bin/bash
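
To try it out, build an image from this Dockerfile and start a container from the result (the my-ubuntu tag is just an example name):

# my-ubuntu is just an example tag
docker build -t my-ubuntu .
docker run -i -t my-ubuntu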

Dockerfiles control the processes you want to run, automating what you would otherwise do by hand when running & starting a Docker container. Try to keep the number of RUN commands to a minimum: Docker is still young & can be buggy when a Dockerfile has RUN commands numbering in the 30s or 40s. It’s best to put the majority of your work into a shell script and execute that shell script instead.
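
As a sketch of that approach (setup.sh here is a hypothetical script sitting next to the Dockerfile):

FROM ubuntu:14.04
# Copy the (hypothetical) setup script into the image
COPY setup.sh /tmp/setup.sh
# One RUN instruction in place of dozens
RUN /bin/bash /tmp/setup.sh
CMD /bin/bash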

For more information on the Dockerfile, check out Docker’s Documentation.

If you’re running a mission-critical application like MySQL, PostgreSQL or even Redis, it’s best to use Supervisor. As mentioned earlier, once its process ends the Docker container quits and your changes are lost. Supervisor monitors the processes it manages and, if they crash, brings them back up.
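
A rough sketch of that pattern, using Redis as the example and assuming the standard Ubuntu package paths (adjust the command for your own process): a small Supervisor config keeps supervisord in the foreground and restarts the process if it dies,

; supervisord.conf (the program command below is an assumption for an Ubuntu Redis install)
[supervisord]
nodaemon=true

[program:redis]
command=/usr/bin/redis-server
autorestart=true

and a few Dockerfile lines install Supervisor and make it the container’s main process:

RUN apt-get update && apt-get install -y supervisor redis-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]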

More to come

There are a tonne of tools to help you work with Docker in production—one of the better ones is Rancher, which I’ll write about at a later date. I’ll also talk about limiting memory usage and auto-scaling containers on DigitalOcean.
