Some random notes on getting docker, nginx, your firewall and postgres to play nicely (if they aren’t)
This post is one I wrote a while back (as a follow-up to the KVM post) but never published. Now that I’ve found some free time, I’m going through and publishing the posts I jotted down but never got around to.
After setting up my KVM-enabled VPS with Arch Linux, it came time to start moving over the deployments of my applications. During this time, however, I ran into some issues that I didn’t expect, so I took some notes, which you might benefit from.
Testing/Porting NGINX configuration
It’s probably a good idea to test the NGINX configuration on the new server while the old server handles all the normal connections. This seems obvious, but expecting things to just work, I took the old server down prematurely, only to run into some NGINX configuration issues. You can even use the same SSL certs from the old server, though of course you’ll see certificate errors until DNS points at the new machine.
After ensuring that the app is functioning properly (no 502 bad gateway errors), you can switch the DNS and wait for the changes to propagate.
nginx -t is your friend
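The test-before-switch workflow might look something like this (a sketch — the server name, config path, and a systemd-managed nginx are all assumptions about your setup):

```shell
# copy a site config over to the new server (hostname and path are hypothetical)
scp /etc/nginx/sites-available/myapp newserver:/etc/nginx/sites-available/myapp

# on the new server: check the configuration for syntax errors
# before touching the running daemon
nginx -t

# reload only once the test passes (assumes nginx runs under systemd)
systemctl reload nginx
```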
Testing DB connectivity
I chose not to run my DB in a container (there are, of course, lots of sources that both suggest and warn against that practice), but I did not realize that going down that route meant I would need to ensure that my database played nicely with all the apps running in docker containers. I use PostgreSQL for a bunch of apps, and in this case this meant modifying
postgresql.conf to accept connections from applications running behind docker.
Obviously, it’s a good idea to test DB connectivity before trying to push the app live, as otherwise all you’ll get is a broken app. In this case, what I needed to do was:
Ensure that the IPs postgres listens on (listen_addresses) include the IP that docker assigns itself on my machine (in my case
172.17.0.1). Another way to approach the problem would be to expose a socket and mount it into containers when they start up (but I didn’t explore that option).
pg_hba.conf also determines which IPs are allowed to connect to
postgres. I needed to modify it to ensure that an IP range mask covering all the IPs that would be handed out by docker would also be able to connect to the database.
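Concretely, the two changes might look something like this (the values are illustrative: 172.17.0.1 is the docker bridge IP on my machine, and 172.17.0.0/16 is docker’s default bridge subnet — check yours with ip addr show docker0):

```
# postgresql.conf: listen on localhost and the docker bridge IP
listen_addresses = 'localhost, 172.17.0.1'

# pg_hba.conf: let containers on the bridge subnet connect (md5 = password auth)
host    all    all    172.17.0.0/16    md5
```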
Of course, after dealing with this, things felt quite a bit brittle, and I wondered if the best thing was actually just to dockerize the database process but mount in the actual data folder – then you could use inside-docker DNS to refer to the database more easily, and you’d also be pushed to think about where your database contents are actually stored (and to set up some backups).
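That alternative (which I didn’t end up using) would look roughly like this — the container name and host data path are assumptions, and note that an existing on-host data directory is only usable if its postgres version matches the image’s:

```shell
# run postgres in a container, bind-mounting the data directory so the
# on-disk contents live at a known host path (easy to locate and back up)
docker run -d --name db \
  -v /var/lib/postgres/data:/var/lib/postgresql/data \
  postgres
```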
Another disadvantage of this setup (allowing docker hosts to connect to a DB instance running on the machine) is that you must pass the docker host IP into the container when you start it. So in all:
Punch a hole in PostgreSQL’s config (postgresql.conf) to allow postgres to LISTEN on the docker host IP (in my case 172.17.0.1)
Punch a hole in PostgreSQL’s config (pg_hba.conf) to allow docker-created IPs to connect to postgres with the appropriate wildcard mask
Pass the host IP for docker into the containers on startup
Thinking about it now, the easiest option would have probably been to bind the containers’ postgresql port to the host…
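For comparison, the two ways of wiring things up might look like this (image names, the DB_HOST variable, and the port are all hypothetical stand-ins for your own setup):

```shell
# what I did: pass the docker bridge IP into each app container at start,
# so the app knows where to find the host's postgres
docker run -d --env DB_HOST=172.17.0.1 myapp

# the simpler option in hindsight: run postgres in a container and publish
# its port on the host's loopback, so everything just talks to localhost:5432
docker run -d -p 127.0.0.1:5432:5432 postgres
```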
NOTE You might also want to ensure that your database starts AFTER docker (or at least restarts itself after docker) so that it can bind to the address properly (if docker hasn’t started, the IP
172.17.0.1 won’t exist). If you’re using systemd, this can be achieved with a drop-in override for the postgres unit.
For more information check out the
systemd overriding rules
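A sketch of such an override (assuming the service is named postgresql.service, as it is on Arch):

```
# /etc/systemd/system/postgresql.service.d/override.conf
# (create it with: systemctl edit postgresql)
[Unit]
After=docker.service
Wants=docker.service
```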
Test DNS change with ping/dig
DNS changes can take a long time to propagate; use a tool like
dig or even simple
ping (from another host) to see when things have changed.
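For example (the hostname is a placeholder, and 8.8.8.8 is just one well-known public resolver):

```shell
# ask what your resolver currently returns for the record
dig +short myapp.example.com A

# or query a specific public resolver directly to sidestep local caching
dig +short @8.8.8.8 myapp.example.com A

# ping also prints the resolved IP in its first line of output
ping -c 1 myapp.example.com
```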
Check your firewall
Due to the way I have postgresql talking to my docker containers, I also needed to make some firewall changes. I use
ufw and it’s pretty amazing. What I needed to do was add some rules to allow traffic from docker to reach my database.
So of course, make sure your firewall is enabled, started, and configured to allow traffic on the needed ports from docker containers.
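With ufw, the rule I’d expect to need looks something like this (5432 is postgres’s default port, and 172.17.0.0/16 is docker’s default bridge subnet — both assumptions worth verifying on your machine):

```shell
# allow containers on the docker bridge network to reach postgres on the host
ufw allow from 172.17.0.0/16 to any port 5432 proto tcp

# confirm the rule took and that the firewall is actually active
ufw status verbose
```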
So many of these steps were probably unnecessary (most likely the scheme I employed to connect the DB to docker), but it did yield a working setup, and I haven’t had to think about the moving bits in a while (though it is always a pain that I have to pass the docker IP in at deploy time).