Hi and welcome to the first part of my new series, Diving into Docker. This is a series I have wanted to write for a long time. Docker is an amazing set of tools that has completely revolutionised the way I approach web development. I have been told I go on about it way too much! Anyway, hopefully after this series you'll be converted too. So what are we going to cover?
Disclaimer: This series aims to give you a well-rounded grounding in the Docker ecosystem, but due to its nature it will not cover Docker end to end. I will provide links for further reading.
- The basics - Which is to say this blog post. We'll explore the Docker ecosystem, see what tools are available to us and get them set up. We'll get some interactive experience with Docker manually, and then as an exercise we will build our first image.
- Dockerising an application - This part of the series will illustrate how I go about dockerising an application so that it lays a solid foundation for the future. It will also explore data persistence and other container-related principles.
- Moving to a stack application - Our first look at how we can break a multi-service application stack into containers using Docker Compose.
- Building a development environment with Docker - Using what we have learned so far, we will build a reusable development environment so that when the time comes for a new project you'll never have to worry about setup again!
- Exploring multihost Docker - Finally we will take a look at how we can use Docker across multiple hosts to utilise a larger pool of resources, and illustrate some of the principles of high availability.
If this sounds interesting then let's get started!
With all the above said, I suppose I should explain: what the hell is Docker anyway? In its simplest form, Docker is a set of tools which provide the ability to run containers. Containers are individual isolated spaces in which our application code can run. They are repeatable units which can be run anywhere, irrespective of what OS you're running. In a lot of ways a container is very similar to a virtual machine, with one crucial difference: a container is extremely lightweight because it does not run a full OS within a hypervisor. Instead it runs a virtualised filesystem on the existing kernel, so it shares the underlying host's resources rather than allocating a block of them to a full guest operating system. Docker is officially described like this:
> Docker is the world's leading software container platform. Developers use Docker to eliminate "works on my machine" problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps.
As of writing, the Docker ecosystem comprises four tools:
- Docker Engine - The core runtime responsible for running containers on a single host. This is where our focus will be for this post
- Docker Compose - A tool for defining and running a multi-container application as a single stack
- Docker Machine - A tool which allows us to provision Docker ready machines in a variety of cloud providers and virtualisation software
- Docker Swarm - A tool for unifying several Docker hosts (often created with Machine) into a single engine. It also provides orchestration tools for deploying containerised applications across a cluster of machines.
To install Docker use the appropriate link for your operating system:
- Linux - https://docs.docker.com/engine/getstarted/linuxinstallhelp/
- OSX - https://www.docker.com/docker-mac
- Windows - https://www.docker.com/docker-windows
For Windows & Mac these links should install the full toolset above. For Linux only the Engine is included, so you'll need to install the remaining tools separately (the Docker docs cover this).
Running our first container
Running our first container couldn't be easier! To run a container which just echoes some text:
- Open a terminal
- Run `docker run --rm hello-world`
This will download the image from Dockerhub (a centralised registry for storing Docker images) and provide the following output:
You can read more about Dockerhub from this link: https://docs.docker.com/docker-hub/
```
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
```
Because we passed the `--rm` flag, Docker removes the container once it exits. We can verify this by running `docker ps -a`, which lists all containers including stopped ones.
It's important to note that once a container's main process exits, the container stops. Therefore, whenever we build containers we want to keep running, they should have a process running in the foreground. Also, all containers are created from images: a prebuilt blueprint which defines the environment inside the container, such as installed apt packages and scripts. We will build our own now, as the `hello-world` image was not really all that useful.
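You can see the foreground-process rule in action with two quick runs (assuming Docker is installed; the container name `keepalive` is just an arbitrary label for this demo):

```shell
# The main process (`true`) exits immediately, so the container stops at once
docker run --rm debian true

# `sleep infinity` never returns, so this container keeps running
docker run -d --name keepalive debian sleep infinity
docker ps --filter name=keepalive   # listed, because it is still running

# Clean up the demo container
docker rm -f keepalive
```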
Building a web server interactively
To start off let's build a simple Apache web server interactively. This will allow us to get a feel for how to build an application, but it won't persist as an image we can reuse. We will translate our work into a `Dockerfile` later so it can be reused, but for now let's build our web server service inside a container:
- First, choose a base. Dockerhub provides many official base images from which we can build our own. I'm going to choose `debian`, pulling it with `docker pull debian:latest`
- Next, run the base with some ports exposed. By default a container is not accessible from the outside, so I publish port 80 using the `-p` flag with a mapping. I also ask for an interactive `bash` shell with the `-ti` flags, and remove the container on exit: `docker run -ti --rm -p 80:80 debian bash`. This drops us into a shell whose hostname is the unique container ID, `root@<id>`. We can now run our setup commands.
- First run `apt-get update` to populate the container's package lists
- Next run `apt-get install -y apache2`
- Start the Apache service in the foreground, e.g. with `apachectl -D FOREGROUND`
- Visit http://localhost
- Exit our container by stopping Apache (Ctrl+C) and then leaving the shell with `exit` (or Ctrl+D)
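For reference, the interactive steps above can be collapsed into a single command. Here's a sketch that runs the same sequence detached so the terminal stays free (the container name `oneshot` is arbitrary) - though we'll do this properly with a Dockerfile next:

```shell
# Update, install and run Apache in the foreground, all inside one container.
# -d detaches it; note apt-get will take a little while on first run.
docker run -d --name oneshot -p 80:80 debian bash -c \
  "apt-get update && apt-get install -y apache2 && apachectl -D FOREGROUND"

# When you're finished, tear the container down
docker rm -f oneshot
```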
That's much more useful! But we don't want to do this by hand every time, and it's not reusable. Let's create a `Dockerfile` so we can build and run this image more easily.
Creating a Dockerfile
- Create the file by running `touch Dockerfile` in the terminal, then open it in your text editor
- Next we use Dockerfile instructions to recreate what we did interactively, as follows:
```dockerfile
FROM debian:latest

RUN apt-get update && \
    apt-get install -y apache2

EXPOSE 80

CMD service apache2 start && sleep infinity
```
Note I combine our two apt commands into a single `RUN` instruction. Docker images are built in layers, and each instruction in a Dockerfile results in a new layer, so combining commands this way results in a smaller image. Also the `CMD` is slightly different from our interactive session, as running Apache with `FOREGROUND` doesn't play nicely with Docker here, so we start the service and use `sleep infinity` to keep a foreground process alive.
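For contrast, here is what the same Dockerfile looks like with the commands split apart. It builds an identical server, but produces an extra layer and risks `apt-get install` running against a stale, cached package index if the file is edited later:

```dockerfile
FROM debian:latest

# Two RUN instructions = two layers; the update layer can be cached
# independently of the install, which is usually not what you want
RUN apt-get update
RUN apt-get install -y apache2

EXPOSE 80

CMD service apache2 start && sleep infinity
```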
- Next we build and tag our image by running `docker build -t webserver .` in the directory containing the Dockerfile
- Run the webserver image with `docker run -ti --rm -p 80:80 webserver`. We should get the same result as before.
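With the container running, a quick smoke test from a second terminal (assuming nothing else on your machine is already bound to port 80):

```shell
# Apache's default page should come back with an HTTP 200 status line
curl -sI http://localhost:80 | head -n 1
```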
Great! Our image is now much more reusable, and can be built by others if we provide them with our `Dockerfile`. You'll often find that official images from the hub will serve you better than custom ones, but there will be cases where you need something bespoke and building a `Dockerfile` is the better option. For a full Dockerfile reference see this link: https://docs.docker.com/engine/reference/builder/
Well, that's it for an introduction to Docker! To recap, we have:
- Installed the relevant Docker tools
- Run a prebuilt image from Dockerhub
- Interactively created a dockerised web service
- Translated this into a `Dockerfile`, which is easier to reuse and can be built by others
As you can appreciate, this is only the tip of the iceberg; the possibilities with Docker are endless. Be sure to explore Dockerhub for other images and see if you can get them working. Also try creating your own Dockerfiles with different bases and see how far you get. Unlike learning a programming language, there isn't really a set project-based path for learning Docker; the best way is experimenting and reading through the documentation.
- Dockerhub - https://hub.docker.com/
- Official Docker Images - https://hub.docker.com/explore
- Dockerfile Reference - https://docs.docker.com/engine/reference/builder/
- Docker CLI Reference - https://docs.docker.com/engine/reference/run/#detached-vs-foreground