Docker is awesome. But Docker can be used in many different ways, depending on the use case, the technology context, and the practices of the developers or ops teams.
In my opinion, containerisation requires most of us to change the way we think about ops in our domain. In most use cases, the benefits of this technology are misunderstood if you are not using immutable images.
We probably have to change our minds
Except for a very few companies like Google, which has been running containers for almost 10 years, most of us used to use (or still use) a server or a VM to host our web applications. You simply ask the “system” or “devops” team (if that's not you) to install and configure whatever is needed to run that web application, in whichever language you like. Once they are finished, we can update the source code deployed on the server using Capistrano or any other deployment tool. The result is that the code is updated on the web server(s).
Then Docker's popularity exploded. Everybody started to use it, and most of us replicated the same logic: let's have a container running the web server and mount the source code from the host, or use a data container. In that case, we are using Docker only to ease the server setup.
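That pattern typically looks like this (a sketch; the image name and paths are hypothetical):

```shell
# Mutable pattern: the image only provides the runtime, while the
# application code lives on the host and is bind-mounted at run time.
docker run -d \
    -v /home/deploy/my-app:/var/www/my-app \
    -p 80:80 \
    my-web-server-image
```

Deployments then update the files under `/home/deploy/my-app` on the host, and the container silently starts serving different code.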
This setup uses what I call mutable images: if you change the contents of the mounted volume, you change the behaviour of the container. That means that if you run the same image on another machine, the container may behave differently from the one running elsewhere. We get a kind of portability, but no predictability at all.
Why immutable Docker images?
When we talk about software design, the literature on immutable objects is vast. The devops culture is about bringing devs and ops together, so why not introduce immutability here too? An immutable image is an image that contains everything it needs to run the application, obviously including your source code.
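In practice, this means copying the source code into the image at build time instead of mounting it at run time. A minimal sketch, assuming a PHP application (the base image and paths are illustrative):

```dockerfile
# Immutable image: the source code is baked in at build time,
# so the same image behaves identically on every machine.
FROM php:8-apache
COPY . /var/www/html/
```

Each code change then produces a new image, ideally tagged with the release version (e.g. `docker build -t my-app:1.2.3 .`), so that what you tested is exactly what you deploy.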