This post is one I wrote a while back but never published; now that I’ve found some free time, I’m going through and putting together the posts I jotted down but never got around to publishing.
After setting up my new KVM-enabled VPS, getting the database and initial docker setup taken care of, and running everything once (manually starting all the services and docker containers), I looked to make the configuration restart-resistant. With systemd this is easy for ordinary services, since it’s as simple as sudo systemctl enable <service>; for docker containers, however, it’s a little trickier. Luckily, we can tap into the power of systemd and write ourselves a small unit file that will do the heavy lifting for us.
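(One small thing worth doing up front, in case it isn’t already done: make sure the docker daemon itself comes up on boot. Assuming your distro ships a docker.service unit, that’s just:

sudo systemctl enable docker.service

The example unit further down pulls docker in via Requires= anyway, but having the daemon enabled on its own doesn’t hurt.)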
PSA: Rather than read this article, you should start with (and maybe end with) this one: https://coreos.com/docs/launching-containers/launching/getting-started-with-systemd/
It’s a much better writeup, written by the CoreOS devs. Below, I’m basically going to go through the problems I ran into while working through that guide and getting things working on my own setup.
So you kind of can’t use dynamic variables (ones that might be set by your ~/.bashrc, or at setup time for that matter) in system-level (as opposed to user-level) units. Well, the actual answer is that you can, but it’s difficult (as with most things). For me, this boiled down to the use of /bin/bash -c "<actual command>", as suggested by various internet sources and the CoreOS article mentioned above. A quick internet search should show some of the things people have done to get environment variables safely into their unit start commands.
The reason I needed to do this, of course, was that I had to pass the docker host IP through to the containers as they started. Maybe I should have looked harder for another way to wire things up (I allude to some alternatives in the previous post).
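For a bit of context on what that ends up looking like: the pipeline I settled on pulls the host’s address on the docker0 bridge out of the routing table, and wrapping the docker run in /bin/bash -c is what lets the $(...) substitution actually get evaluated at start time. The output below is illustrative; on a stock install the bridge address is usually 172.17.0.1, but yours may differ:

$ ip route | grep docker0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
$ ip route | grep docker0 | awk '{print $9}'
172.17.0.1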
After starting a unit that I thought was properly configured, it seemed that systemd would immediately run the stop command for that unit. This turned out to be because the command itself was exiting (docker run -d detaches and returns right away). The fix was to set RemainAfterExit=yes, so that systemd considers the unit active even after the ExecStart command exits.
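Stripped down to just the relevant bits, the shape of the fix looks roughly like this (the names here are placeholders; the full unit file is below):

[Service]
# docker run -d detaches immediately, so the ExecStart process exits right away;
# without this, systemd would treat the unit as dead and run ExecStop at once.
RemainAfterExit=yes
ExecStart=/usr/bin/docker run -d --name my-container my-image
ExecStop=/usr/bin/docker stop my-container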
Once you can safely systemctl start <service> for all the relevant docker-run application units, it makes sense to restart the box and make sure everything’s running and connected properly when it comes back up. A quick docker ps should reveal whether the services you expect to be running automatically actually are.
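Roughly, the check goes something like this (substitute your own unit name):

sudo systemctl enable <service>    # start the unit on boot
sudo reboot
# ...once the box is back up:
docker ps                          # the expected containers should be listed
sudo systemctl status <service>    # and the unit should show as active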
When I did this step I actually realized that I needed to ensure that the databases started after docker, due to the way I set up the database and docker communication (see the previous post).
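If you need that same sort of ordering, one way to get it without editing the packaged postgresql unit is a drop-in override; this is just a sketch, with the file path assumed rather than copied from my setup:

# /etc/systemd/system/postgresql.service.d/after-docker.conf
[Unit]
After=docker.service
Wants=docker.service

A sudo systemctl daemon-reload afterwards picks up the override.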
Here’s an example of a completed unit file for a service I run, called configr:
[Unit]
Description=Configr.io run in local docker container
Requires=docker.service postgresql.service
After=docker.service postgresql.service
[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill configr-web-prod
ExecStartPre=-/usr/bin/docker rm configr-web-prod
ExecStart=/bin/bash -c "/usr/bin/docker run --name configr-web-prod -p 127.0.0.1:3333:3333 --env PG_CONFIGR_ADDR=$(ip route | grep docker0 | awk '{print $9}') -d configr-web"
RemainAfterExit=yes
ExecStop=/usr/bin/docker stop configr-web-prod
[Install]
WantedBy=multi-user.target
NOTE - The ExecStart command is a single line, even though it may wrap when displayed.
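Getting that file into place and enabled is the usual dance; assuming it’s saved as configr.service, something like:

sudo cp configr.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable configr.service
sudo systemctl start configr.service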
Of course, another benefit of this setup is that you can now basically manage your docker-run services with systemctl, which, if you’re a fan of systemd, is awesome.
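That means the familiar commands just work against the container unit, for example:

sudo systemctl status configr.service    # is the unit active?
sudo systemctl restart configr.service   # re-runs the kill/rm/run cycle from the unit file
sudo systemctl stop configr.service      # runs ExecStop, i.e. docker stop
sudo journalctl -u configr.service       # output from the unit's Exec* commands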