Things move fast in the container world, and between the last article and this one Atomic Workstation has been renamed Silverblue [1], most appropriate for my hair colour [2]! If you want to discuss container workstations I would urge you to join the community there [3].
In part 1 of this blog series [4] I spoke about what Project Atomic is and what it means to use it as your daily driver. As promised, this time we are going to get our hands dirty and containerise a non-trivial application to get a feel for what using a container-based OS is really like. As I will assume that you are a developer if you are reading this, I’ve picked the fantastic IntelliJ GoLand [5] as the app we will containerise. Not only is GoLand a great IDE in its own right, but the container we make can be used as a basis for containerising any of the other IntelliJ tools, since the dependencies are all the same; feel free to swap out the download URL for the IntelliJ product of your choice.
If you want to check out the finished code for this container take a look at https://github.com/ninech/atomic-blog-p2 which has a full working example. You can pull the container directly from https://hub.docker.com/r/ninech/goland/.
Just as all life evolved out of the sea our container evolution starts with a whale, or to be more specific, a big fat docker daemon.
Dockerfiles are still the de facto descriptive format for containers, regardless of what tools we use to build and run them later, so we will start with this when building our app.
This is pretty simple, but let’s review it quickly:
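For reference, a minimal Dockerfile along these lines might look like the sketch below; the base image, package list and GoLand version are assumptions for illustration, not necessarily the exact file from the repository:

```dockerfile
# A minimal sketch - base image, X11 package list and GoLand version
# are assumptions; swap the download URL for any other IntelliJ product.
FROM fedora:28

# X11 client libraries the IDE needs, plus tools to fetch and unpack it
RUN dnf install -y libX11 libXext libXrender libXtst libXi fontconfig tar && \
    dnf clean all

# Download and unpack GoLand into /opt
RUN curl -fsSL https://download.jetbrains.com/go/goland-2018.1.3.tar.gz \
    | tar -xz -C /opt && mv /opt/GoLand-* /opt/goland

# Run as an unprivileged user whose home we bind mount at runtime
RUN useradd -m developer
USER developer
ENV HOME=/home/developer

CMD ["/opt/goland/bin/goland.sh"]
```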
Once we have built this Dockerfile we should have everything we need to run it. Perhaps you already have a run command in mind, maybe it looks something like this:
docker run --rm -d --name goland \
    -v /home/${USER}/nine-goland:/home/developer \
    ninech/goland
If we run this we will get the output:
Startup Error: Unable to detect graphics environment
Oh no! Unfortunately we were too hasty with our simple run command, so this is the first thing to fix. The good news is that it’s really simple [6]: the display is accessed via a socket, so we need to bind mount that into the container, and we also need to set the host display to the local machine display by passing in the DISPLAY variable [7]:
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=${DISPLAY}
While we are here, let’s make one more small alteration that will be useful. As we are running Fedora we also have SELinux to contend with, but that is also pretty easy to deal with as we just need to add a security label. You can see this policy and exactly what it does in the Project Atomic SELinux profile repo [8]:
--security-opt label=type:container_runtime_t
Put all of that together and we end up with a command like so:
$ docker run --rm -d --name goland \
    -v /home/${USER}/nine-goland:/home/developer \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=${DISPLAY} \
    --security-opt label=type:container_runtime_t \
    ninech/goland
If we try this we will find that our application now runs, and we can start developing some Golang tools.
If this was just a docker tutorial this is where we would stop, but because this is atomic and we are living on the bleeding edge we are just getting started!
The atomic command is a great tool to help us control the lifecycle of our applications and to run them without having to remember, or alias, docker commands like the one we used above. The atomic command works with a set of specific LABELs that you add to your Dockerfile, which allows it to control important lifecycle stages of your application. This is a very neat pattern, as it allows us to package everything we need for our application inside the image and makes it easy for consumers of our image to use it.
Let’s add a new label to the Dockerfile called RUN. Inside it we will paste the docker run command that we made above (minus the --rm option as at this point we wish to persist the container between restarts):
LABEL RUN='docker run -d --name goland \
    -v /home/${USER}/nine-goland:/home/developer \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=${DISPLAY} \
    --security-opt label=type:container_runtime_t \
    ninech/goland'
If we now build the image again, the magic can begin:
$ docker build -t ninech/goland .
$ atomic run ninech/goland
At this point our application starts, using the command that we added to the RUN label. In the background the atomic tool is now managing this container, and will persist the container and its contents for us.
Atomic also allows us to set INSTALL and UNINSTALL labels, which can perform (usually super privileged) actions to set up the container. These are normally used to install systemd files or create other items needed on the host system for the container to run. Atomic also supports the command `atomic upgrade ninech/goland`, which will pull a new image for us and update the container [9]. In this way, your application container can package everything needed for the entire lifecycle of your application. Pretty, pretty, pretty good! But as I said before, we are just getting started...
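To illustrate the pattern (these labels are not part of the image we built above, and the script names are hypothetical), INSTALL and UNINSTALL follow the same convention as RUN, typically mounting the host so a setup script can drop systemd units or other files onto it:

```dockerfile
# Hypothetical lifecycle labels - install.sh and uninstall.sh are
# illustrative scripts, not part of the image built in this article.
LABEL INSTALL='docker run --rm --privileged -v /:/host ninech/goland /usr/bin/install.sh'
LABEL UNINSTALL='docker run --rm --privileged -v /:/host ninech/goland /usr/bin/uninstall.sh'
```

With these in place, `atomic install ninech/goland` and `atomic uninstall ninech/goland` run the corresponding commands for you.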
So we have a container running, it has access to the graphics system and we can run it from a simple atomic command, but it is still using Docker at this point, and our stated aim is to run this application without it.
There are quite a few tools that you can use to replace Docker; none of them replaces every aspect of the docker tool, but together they create a very similar experience. For the purpose of this blog post we will focus on podman/libpod [10], as that is the closest to an ‘all in one’ replacement that currently exists. Project Atomic has an excellent blog [11] introducing this tool, which I would suggest you read before we continue [12].
Podman is a great alternative to docker. You can install it on Atomic by running:
rpm-ostree install podman
Podman covers most of the functionality of docker [13] by pulling together many of the existing image tools into a convenient package, but without a daemon [14]. A lot of people running Atomic setups simply alias docker to podman.
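If you want that alias yourself, a one-liner like this does it (assuming bash; adjust the rc file for your shell of choice):

```shell
# Make existing `docker ...` muscle memory and scripts invoke podman instead
echo 'alias docker=podman' >> ~/.bashrc
```

New shells will then transparently use podman wherever you type docker.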
Now that we have podman the rest is really easy; we just need to replace docker with podman in our command:
podman run -d --name goland \
    -v /home/${USER}/nine-goland:/home/developer \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=${DISPLAY} \
    --security-opt label=type:container_runtime_t \
    ninech/goland
That is it: everything should now be working, without a daemon, so you can safely restart your docker process and keep your IDE up and running. Give it a try; it’s surprisingly satisfying!
A word of warning: if you use the above command as your atomic RUN label it will pull the image twice, once via the docker engine so that atomic recognises it, and again when you start it with podman. This will hopefully change once the atomic command uses podman as the default container engine.
One thing that we did not cover in this article is getting the GoLand debugging tools to work. Internally these use Delve, and for that to work we need to allow fork/exec inside the container. By far the easiest way of dealing with this is to just allow the container to run unconfined, as follows:
podman run -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v ${HOME}:/home/developer \
    -e DISPLAY=${DISPLAY} \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --security-opt label=type:container_runtime_t \
    --security-opt seccomp=unconfined \
    ninech/goland:latest
Obviously this sacrifices all security for ease of use, which is only appropriate in an environment that you have total control over. It was difficult to find a way around this in my trials, so if you have a better solution it would be great to hear it [15].
This article has just covered the basics of getting a development IDE container running. You will probably want to add additional dependencies and convenience installs for tools that you regularly use, but the good news is that you now have a flexible container that will work for all IntelliJ products, can run without a daemon, and allows you to bake in the common tools you use without touching your base system.
Building software in this way clearly exposes its dependencies, giving you a clearer picture of what is really needed to run an application, and shows that you can leverage containers to run any workload, not just server-side applications.
In the next, and final, part of this series I will take a look at buildah for creating images without a daemon, upgrading your operating system by rebasing, and summarise some of the issues and lessons learned from using Atomic as my main operating system.
Read the first part of the series here:
«Dockerising my Workstation with Atomic: Part 1»