Getting started with Docker
Docker is a popular way of packaging your applications, or microservices, into container images which can be copied to any running Docker host. Docker uses a Linux kernel feature - namespaces - to run applications in isolation from the rest of the operating system, much like virtual machines do, while avoiding common pitfalls of virtualization like fixed CPU and RAM allocation. Each Docker container behaves much like a standalone application, except for the fact that it's isolated from the host OS.
I guess a bit of a warning is in order: Docker is far from an enterprise product with one prescribed way of doing things - you can build your setup in many different ways. For the Docker host you can run a full-blown Linux distribution like Debian, Ubuntu or others, or you can set up CoreOS, a distribution which provides little more than Docker itself.
And then, you can run Docker images with docker itself, or you have the option of running more of them with docker-compose, fleet and other tools, each of which varies in functionality, and many of which are hard to use. Services and additional tools like Shipyard are popping up which will, with time, make management and migration of Docker containers easier for everyone.
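To give the docker-compose mention above some shape, here is a minimal, hypothetical docker-compose.yml which starts two containers together - the service names, images and port mapping are placeholders, not a recommendation:
# docker-compose.yml - a minimal, hypothetical two-container stack
version: "2"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  web:
    image: nginx
    ports:
      - "8080:80"
With that file in place, a single docker-compose up -d starts both containers, instead of a handful of separate docker run invocations.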
Much like any new technology, Docker needs to go through various testing stages, where people identify pitfalls and best practices, before it's deployed in production environments. While containers are easily deployable, their management still seems utterly complicated. But alas; you live, you learn.
Finding a use
I've had a stationary computer in my flat for most of my adult life. It has been everything from a web server, an HTPC, a file server and a backup server, to many other iterations of software which I believed to be useful at the time. With time, specialized hardware entered my home and solved the most basic problems in a user-friendly way - most notably a WD TV and an Ouya.
While I love virtualization, setting up a home virtualization environment today is still pretty costly. A mainboard with an 8-core CPU would cost about $400, before adding RAM and disk space. Docker runs on commodity hardware, which makes it a very attractive option for home or laptop use. With smart planning, Docker can replace many common needs today - like hot-spot instances, for example.
Quick basics
A Docker image is a read-only filesystem which contains the application(s) you want to run. A Docker container is a running process which uses an image to start - it can write files, change configs and execute programs, but when you stop it, you lose all the changes that were made in the container.
A container can have any number of volumes attached to it. Volumes are stored in the host operating system and keep data between container restarts. This is where MySQL should keep its data, where Samba keeps its files, and so on. Volumes can be shared between containers, but only on the same host.
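A quick sketch of that behaviour, using busybox as a throwaway image and /srv/data as an example host path:
# bind-mount a host directory into the container as /data;
# the --rm flag deletes the container as soon as it exits
docker run --rm -v /srv/data:/data busybox sh -c 'echo hello > /data/hello.txt'
# the container is gone, but the file it wrote is still on the host
cat /srv/data/hello.txt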
Useful commands:
docker search - searches Docker Hub for images
docker run - runs the specified image in a new container
docker exec - runs a command inside a running container
docker stop / docker restart - stops or restarts a running container
docker rm - removes a container (only when it's stopped)
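Tying those commands together into a short session - nginx is just an example image here:
docker search nginx              # find an image on Docker Hub
docker run -d --name web nginx   # start a container from it, in the background
docker exec -it web nginx -v     # run a command inside the running container
docker stop web                  # stop the container
docker rm web                    # remove it once it's stopped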
File server
So, I started with a file server. Both the Ouya and the WD TV need a Samba file server from which they can pull movies and music. Using dperson/samba it was easy to set up a working Samba server. But disaster strikes - the WD TV also needs a running nmbd service, which advertises the Samba shares. While I researched which ports this service uses, I didn't come any closer to making it work by just forwarding UDP ports 137 and 138.
In the end, I discovered Docker's --net option. By default, Docker runs containers in "bridge" mode, which is not enough for running the nmbd service. The chosen Docker image also doesn't start nmbd by default, so we have to do that ourselves:
# run docker container in net=host mode, setting up users and shares
docker run --net=host --name samba \
-v /mnt:/mount \
-d dperson/samba \
-u "admin;admin" \
-s "public;/mount;yes;yes;yes;all"
# run nmbd service
docker exec -it samba /etc/init.d/nmbd start
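To check that both daemons came up, you can look at the container logs; and, assuming you have smbclient installed on another machine on the network, you can list the shares from there:
# check the container logs for smbd/nmbd startup messages
docker logs samba
# from another machine on the LAN, list the shares (replace <server-ip>)
smbclient -L //<server-ip> -U admin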
Other containers
Other containers are available on Docker Hub, and you can search for whatever you need. For example, you can search for emby, a home media server:
# docker search emby
NAME                   DESCRIPTION                   STARS   OFFICIAL   AUTOMATED
emby/embyserver                                      20                 [OK]
dperson/emby           Emby media server container   2                  [OK]
geertbongers/emby      Emby server                   0                  [OK]
emby/embyserver-beta                                 0                  [OK]
sdhibit/emby                                         0                  [OK]
As you can see, there are several images available, and knowing which one to choose takes a bit of work on your part. Different images support different configuration options, and figuring out what you want and who you want it from is not trivial.
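Whichever image you settle on, pulling and running it looks the same; the example below assumes Emby's default web port 8096 and a /config volume, so check the image documentation before copying it:
# pull the image and start it; the port and volume path are assumptions
docker pull emby/embyserver
docker run -d --name emby \
-p 8096:8096 \
-v /srv/emby:/config \
emby/embyserver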
Official containers
Some containers are provided by official sources; for example, mysql and percona both provide flavors of their SQL server offerings, which are tagged as official builds.
# docker search percona
NAME      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
percona   Percona Server is a fork of the MySQL rela...   44      [OK]
...
# docker search mysql
NAME      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
mysql     MySQL is a widely used, open-source relati...   1115    [OK]
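Running the official mysql image follows the pattern documented on Docker Hub: the root password goes in through an environment variable, and a volume keeps the databases between container restarts (the host path here is just an example):
# start the official mysql image with a persistent data volume
docker run -d --name mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-v /srv/mysql:/var/lib/mysql \
mysql
# connect with the mysql client that ships inside the container
docker exec -it mysql mysql -uroot -psecret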
With some of my clients, production usage of such images should be avoided. Many services, especially databases, have significant resource requirements and run on dedicated hosts, be it bare metal or virtual machines. While the images are certainly usable for setting up a microservices environment, a very typical scenario in real-world production environments is the orchestration of tens or even hundreds of database clients, as well as database hosts. While orchestration tools exist for Docker, you'd need tens of servers before using them would be less overhead than maintaining the servers with a bit of scripting automation.
I can't say I'm convinced that Docker is the right way to build services with a large number of containers, but that might just be my bias - if you're starting a service from scratch and using Docker from the start, it might be easier to learn how Docker behaves in a smaller environment and then slowly scale it out to a multi-host, multi-datacenter setup.
Final thoughts
Docker is still quite volatile when it comes to changes being made across the whole ecosystem, but the benefits are clear. It takes a bit of time to wrap your head around how the whole thing actually works, even beyond the scope of this short introduction. But the value is immediately clear - you can replicate your setup quickly, and you get a level of cleanliness which just isn't possible when you're installing several daemons and services into one virtual machine or onto any Linux OS.
The most typical process when building a web application is to stuff all the possible servers, services and applications onto one host. As the application evolves or grows, some of those services outgrow the capacity of the host and need additional hosts for a more dedicated approach. With Docker, this migration from one machine to several machines should be slightly easier. I'm saying "should", because at this point the value proposition seems to center on the fact that these services are more cleanly isolated within one host. So far I haven't seen an example which grew out of that - only a few examples where Docker was used at scale from the very start.
Perhaps the most valuable thing about Docker is also the most overlooked one. You can spin up a container with the software you're going to test - Elasticsearch, Sphinx, Solr, whatever you need - and be sure that once your tests indicate a clear winner, you can remove all the other containers and images and leave your host OS in the state it was in before: clean.
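As a sketch of that workflow - the image name and port are assumptions, based on the official elasticsearch image of the time:
# spin up elasticsearch for a quick test; 9200 is its default HTTP port
docker run -d --name es-test -p 9200:9200 elasticsearch
curl http://localhost:9200/
# the test is done - remove the container and the image, the host stays clean
docker rm -f es-test
docker rmi elasticsearch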
And a very helpful hint if you're planning to develop a scalable architecture for your product: follow The Twelve-Factor App methodology from the start; it will save you a bunch of effort in the long run.