From zero to a reasonably secure dockerized application running on a KVM VPS

This post is one I’ve had written for a while but never published; now that I’ve found some free time, I’m going through and putting together posts that I jotted down but never got around to publishing. This one was written when I ran into issues going from an OpenVZ-powered VPS to a KVM one, since I wanted to use Docker to simplify my deployments a little bit (why I did so is a whole other discussion, for another time).

0. Get a KVM VPS, not an OpenVZ one

  • Or maybe get an OpenVZ one if the kernel is new enough for you
  • The main difference between OpenVZ and KVM in my case was that OpenVZ runs on a shared (host) kernel, whereas KVM gives you your own. Since my provider’s OpenVZ kernel was too old (Docker needs a fairly recent kernel), I was forced to go with KVM.

1. Put the CD in the drive… in the cloud

  • Exactly what it sounds like: if you’re going to start from a clean install, you gotta put the CD in the drive, in the cloud. How to do this depends on the specific cloud vendor, but luckily enough, mine (INIZ) makes it pretty easy.

2. Install Arch Linux (or some other Linux distro)

3. Set up fail2ban

  • If you’ve never heard of fail2ban, check it out
  • It deals with all those pesky people who will try to touch your box (a minimal setup is sketched below)
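On Arch, that boils down to something like the following (the package name is real, but the jail settings below are illustrative defaults rather than anything you have to use):

    pacman -S fail2ban

    # local override for the sshd jail; leave jail.conf itself untouched
    cat > /etc/fail2ban/jail.local <<'EOF'
    [DEFAULT]
    backend = systemd        # Arch logs sshd to the journal, not /var/log/auth.log

    [sshd]
    enabled  = true
    maxretry = 5
    bantime  = 3600          # seconds
    EOF

    systemctl enable --now fail2ban
    fail2ban-client status sshd    # confirm the jail is live and see current bans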

4. Set up UFW (or do your iptables config yourself)

  • If you’ve never heard of UFW, check it out
  • Default deny all incoming traffic, then poke holes for SSH. For extra security while setting up, only allow SSH to come in from your current IP (sketched below).
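Something along these lines makes a reasonable starting rule set (YOUR_IP is a placeholder, and exact commands may differ slightly by distro):

    pacman -S ufw
    ufw default deny incoming
    ufw default allow outgoing
    # while setting up, only accept SSH from your current address
    ufw allow from YOUR_IP to any port 22 proto tcp
    # (later you can loosen this to: ufw allow 22/tcp)
    ufw enable
    systemctl enable --now ufw      # so the rules come back after a reboot
    ufw status verbose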

5. Set up SSH

  • The sooner you can switch from password auth to using .ssh/authorized_keys, the better.
  • Set up sshd (on Arch, usually by modifying /etc/ssh/sshd_config) to disallow password auth
  • Update .ssh/authorized_keys for root
  • SSH in from your computer (keep your existing connection open until you’re sure all the settings are right)
  • In case you do mess up some settings, you can usually fall back to a more direct access method from your cloud provider (a VNC or serial console, for example) that will still allow password login
  • Set up the actual user you’re going to be using, and make sure to set permissions appropriately; not every user needs to be able to sudo or have root permissions. The whole sequence is sketched below.
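Roughly, the sequence looks like this (paths assume Arch; the user name “deploy” is just a placeholder):

    # 1. put your public key in place (run from your local machine)
    ssh-copy-id root@YOUR_SERVER_IP

    # 2. on the server, disable password auth in /etc/ssh/sshd_config:
    #      PasswordAuthentication no
    #      PermitRootLogin prohibit-password
    # then reload sshd
    systemctl reload sshd

    # 3. create the unprivileged user you'll actually work as
    useradd -m -s /bin/bash deploy
    passwd deploy
    # only add them to wheel (sudo) if they genuinely need it:
    # usermod -aG wheel deploy

    # 4. from a *new* terminal, confirm key-based login still works before logging out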

6. Update Arch, set up software

  • Update Arch so you have the latest software (the whole sequence for this step is sketched after this list)
  • Set up database, if not already set up
    • I decided to use a non-dockerized RethinkDB instance, managed by systemd (with a custom configuration to bind to the correct address and port)
    • Enable it with systemd (on Arch, installing rethinkdb gives you a systemd unit file automatically)
  • Set up NGINX or your reverse-proxy/web-server of choice
    • Enable with systemd (using automatically created systemd unit file)
    • Poke holes in UFW for web traffic (usually a command like ufw allow http, plus ufw allow https if you’re serving TLS)
  • To easily test, add a simple NGINX upstream that will represent your actual server, and run that fake upstream with python -m http.server <upstream port> (assumes Python 3)
  • Install Docker
    • Poke holes in UFW for requests to the host from docker containers, with a UFW rule like ufw allow proto tcp from 172.17.0.0/16 to any port 28015 (28015 being RethinkDB’s client driver port)
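Pulled together, the rough sequence for this step might look like the following (package and service names assume Arch, and the upstream port 8080 is just an example):

    pacman -Syu                               # full system upgrade first
    pacman -S rethinkdb nginx docker

    # exact unit names can differ slightly between packages/versions
    systemctl enable --now rethinkdb nginx docker

    # web traffic in through UFW
    ufw allow http
    ufw allow https                           # if you terminate TLS here

    # let containers on the docker bridge reach RethinkDB on the host
    ufw allow proto tcp from 172.17.0.0/16 to any port 28015

    # quick end-to-end test: stand in for the real app with python's built-in server
    # (assumes an NGINX upstream/proxy_pass pointing at 127.0.0.1:8080)
    python -m http.server 8080 &
    curl -i http://localhost/                 # should come back through NGINX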

7. Build the image on your development machine

  • There are tons of build tools you can choose from to make your builds easier. Since I’ve never experienced make hell, I’ve chosen to use make for its relatively simple syntax and near-universal support.
  • The awesome thing about using make is that it sits high up enough to just call things like docker and whatever else you need to do your build. This means I can easily automate my docker build step without too much code, just as I’d run it in a shell.
  • NOTE The configuration that you’ll need at container run-time is sometimes not exactly straightforward. Note that the virtual network your docker containers use to access the host system is likely something like 172.17.0.x.
  • Make a nice and tidy Dockerfile, making sure to put the things that change the most at the bottom, and relatively static requirements at the top, for speedy (well-cached) builds.
  • A great debugging trick is to run your container with docker run -it <container> /bin/bash, so you get to enter containerland and see how everything is laid out before your application would normally start (or use docker exec -it <container> /bin/bash to connect to a running container and poke around); see the commands after this list.
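For reference, the build/debug loop boils down to something like this (image and container names are placeholders):

    docker build -t myapp:latest .

    # drop into a fresh container instead of starting the app, to inspect the layout
    docker run -it --rm myapp:latest /bin/bash

    # or poke around inside an already-running container
    docker exec -it <running-container-id> /bin/bash

    # containers reach the host over the docker bridge, typically at 172.17.0.1,
    # so that's the address to point your app at for the host-side database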

8. Transport the image to the deployment machine
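One way to do this without running a registry is to stream the image over SSH with docker save and docker load (the image name, user, and host below are placeholders):

    docker save myapp:latest | gzip | ssh deploy@YOUR_SERVER 'gunzip | docker load'

    # or as two steps, if you'd rather keep a tarball around
    docker save -o myapp.tar myapp:latest
    scp myapp.tar deploy@YOUR_SERVER:/tmp/
    ssh deploy@YOUR_SERVER 'docker load -i /tmp/myapp.tar'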

9. Write awesome one-line build, transport, and deploy make targets

  • I handled this by splitting up the targets: one for building the image, one for transferring it, and one for running it on the server (a sketch follows).
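Such targets might look something like the sketch below; the target names, image tag, host, and ports here are all placeholders rather than my exact Makefile:

    # recipe lines must be indented with real tabs
    IMAGE  := myapp:latest
    SERVER := deploy@YOUR_SERVER

    image:
    	docker build -t $(IMAGE) .

    transfer: image
    	docker save $(IMAGE) | gzip | ssh $(SERVER) 'gunzip | docker load'

    deploy: transfer
    	ssh $(SERVER) 'docker rm -f myapp 2>/dev/null || true; docker run -d --name myapp -p 8080:8080 $(IMAGE)'

    .PHONY: image transfer deploy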

Thoughts about the security of this basic setup

  • If I had to try and list the attack surface of this setup, I’d say:
    • The kernel, and the HTTP-handling software (NGINX)
    • Ports 80 and 22, and the OpenSSH daemon (sshd) behind the latter
    • Your app, and the docker software (docker runs as root, so if a container escape happens, the attacker is root on your system)
    • Operational security, machine access, etc.
  • Give it a think for yourself: am I missing any areas here? What else might be good to tighten up?

As far as actually upgrading the VPS with INIZ goes, it was extremely easy (which of course is very much in their best interest). The upgrade happened in place, and I was able to simply restart to see the increased cores & memory in /proc/cpuinfo and in top/htop. I’m also growing more and more impressed with systemd every day I use it. It’s a very intuitive system, and very rewarding once you figure out how to write good unit files and get the hang of tools like systemctl and journalctl.
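For reference, a handful of commands cover most of the day-to-day interaction (the unit names here are just examples):

    systemctl status nginx              # is it running, and what did it last log?
    systemctl restart rethinkdb
    journalctl -u nginx --since today   # logs for a single unit, straight from the journal
    journalctl -u nginx -f              # follow logs live while testing
    systemctl daemon-reload             # after adding or editing unit files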