Serving an HTTPS-enabled application on Kubernetes

tl;dr - It’s pretty easy if you have Let’s Encrypt certificates set up and Kubernetes Ingress/DNS working properly (I’ve covered how I set these up in previous posts, so check them out for reference). Skim through to see the final Kubernetes resource configuration that I use in production for Passcue.me.

So far we’ve gone through a lot of Kubernetes-related posts, from setting up Kubernetes manually on a single machine, to getting regular non-authenticated HTTP apps running on Kubernetes, to setting up a database on Kubernetes and setting up Let’s Encrypt-powered TLS certificates. To finish the path to running a full “production ready” application on Kubernetes, the next step is to get a fully HTTPS-powered app running, with a database in the background.

In this post, I’m going to go through what I did to port Passcue (all the code is open source on Gitlab) to my small Kubernetes cluster.

Step 0: Set up DNS and NGINX Ingress

I use Kubernetes DNS in my setup, as well as an NGINX ingress controller, so installing those was a strict requirement for me. You may already have a cluster with DNS and an ingress option set up, but if you don’t, check out the previous posts on setting up ingress and setting up DNS.

You’ll also want to go to whatever DNS service provider you use (I use and very much like Gandi.net) and point your domain names at your Kubernetes cluster once you’re done setting everything up.
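Once the records are set, a quick dig can sanity-check that the A record actually points at your cluster (the domain here is mine; the IP you compare against is whatever public IP your ingress node has):

```shell
# Resolve the domain's A record -- it should print the public IP
# of the node running your NGINX ingress controller
dig +short passcue.me A
```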

Step 0.1: Install Certificates

An obvious prerequisite for TLS: you need to obtain signed certificates for your site. Check out the previous blog post for how I did it using ployst/docker-letsencrypt, or maybe take a crack at kube-lego, which looks pretty awesome and fully automated.
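Whichever route you take, the end result should be a TLS secret sitting in the cluster. A quick way to verify it’s there (assuming the secret is named letsencrypt-certs-all, which is the name the Ingress config later in this post expects):

```shell
# Confirm the secret exists and check its type -- TLS-type secrets report
# kubernetes.io/tls, though some tooling creates Opaque secrets instead
kubectl get secret letsencrypt-certs-all -o jsonpath='{.type}'

# Inspect the keys inside -- you want to see tls.crt and tls.key
kubectl describe secret letsencrypt-certs-all
```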

Step 0.2: Make a plan

Thinking about what it would take to port Passcue before starting was important; working through the concepts that needed to interact helped me outline the general list of tasks to accomplish. Here’s what I thought I’d have to do:

  1. Set up the database (prepare the RethinkDB cluster), particularly running migration(s)
  2. Start pushing containers to Gitlab using their awesome container registry feature.
  3. Port the actual application
  4. Create the Kubernetes resource configurations that will pull the images from gitlab and start the front and backend of the app
  5. Add the Ansible configuration that should make this process more automated, using the kubernetes module.

Step 1: Set up the required database

This step is pretty easy, given that it’s already covered in my previous blog post on how I set up RethinkDB on the cluster – the RethinkDB cluster is up and running smoothly, and the master node is accessible at the DNS name rethinkdb-master.

One step that took a bit of time was stepping back into the Passcue codebase and touching Clojure code for the first time in a while. Lisp is amazing, Clojure is an awesome Lisp on the JVM, and getting back into that codebase was like riding a bike. Unfortunately, I found that I had done a bunch of things slightly wrong, or not as easily as I could have, so I had to rewrite some of my utility scripts – the database migration scripts in particular. Here’s what the code looks like now:

(use '[leiningen.exec :only (deps)])
(deps '[[com.apa512/rethinkdb "0.15.26"]
        [passcue "0.1.0"]
        [org.clojure/tools.logging "0.4.0"]])

(require '[rethinkdb.query :as r]
         '[passcue.api.v1.db :as p-db]
         '[passcue.api.v1.api :as p-api]
         '[clojure.tools.logging :refer [info]])

(defn ensure-table-in-db
  "Ensures that an empty table \"table-name\" exists in database \"db-name\""
  [db-name table-name optargs conn]
  (let [existing-tables (-> (r/db db-name)
                            (r/table-list)
                            (r/run conn))]
    (if (some #{table-name} existing-tables)
      (info "Table [" table-name "] already exists... ignoring")
      (-> (r/db db-name)
          (r/table-create table-name optargs)
          (r/run conn)))))

(defn ensure-db
  "Ensures that a database with name \"db-name\" exists"
  [db-name optargs conn]
  (let [existing-dbs (r/run (r/db-list) conn)]
    (if (some #{db-name} existing-dbs)
      (info "Database [" db-name "] already exists... ignoring")
      (-> (r/db-create db-name)
          (r/run conn)))))

;; Establish DB connection
(with-open [conn (r/connect :host "127.0.0.1" :port 28015 :db "passcue")]
  ;; Add "passcue" DB
  (ensure-db "passcue" nil conn)

  ;; Add relevant tables
  (ensure-table-in-db "passcue" "users" nil conn)
  (ensure-table-in-db "passcue" "api_keys" {:primary-key "key"} conn))

The issue I was facing getting back into the codebase was that I couldn’t quite run the database migration script as a script. I assume in the past I actually got into the full environment to run the code, but I should have been able to just lein exec the script. The bit I needed to add (that I didn’t have before) was the deps specification at the top, ensuring that the necessary dependencies would be downloaded when the file was run. Of course, for this code to work properly I also needed to kubectl port-forward to the database cluster to make sure that 127.0.0.1:28015 would point there.
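For reference, the combination of commands looks roughly like this. The service name rethinkdb-master comes from the earlier RethinkDB post; the script filename is just a stand-in for wherever your migration script lives, and older kubectl versions may require targeting a pod rather than a service:

```shell
# Forward local port 28015 to the RethinkDB master inside the cluster
kubectl port-forward svc/rethinkdb-master 28015:28015 &

# Run the migration script via the lein-exec plugin
# (which is what provides leiningen.exec/deps)
lein exec migrations/ensure-db.clj
```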

I found this Stack Overflow post on Leiningen deps super useful.

Step 2: Start pushing containers to Gitlab

This step is covered in the previous post on serving HTTP applications, but the idea here is to give Kubernetes the credentials (an imagePullSecret) it needs to pull from Gitlab’s private container registry.
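In short: build and push images to Gitlab’s registry, then create a docker-registry secret for Kubernetes to pull with. The secret name gitlab-private-registry matches what the Deployment below references; the image path is the same placeholder used in the Deployment, and the credentials are obviously yours to fill in:

```shell
# Build and push an image to Gitlab's container registry
docker login registry.gitlab.com
docker build -t registry.gitlab.com/path/to/the/passcue-frontend:1.0 .
docker push registry.gitlab.com/path/to/the/passcue-frontend:1.0

# Create the pull secret that the Deployment's imagePullSecrets refers to
kubectl create secret docker-registry gitlab-private-registry \
  --docker-server=registry.gitlab.com \
  --docker-username=<gitlab-username> \
  --docker-password=<gitlab-access-token>
```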

Step 3 & 4: Port the actual application, and make the Kubernetes resource configuration

For Passcue, the front end code and backend code exist in separate repositories, generate different containers, and run independently of each other (though of course they need to be linked under the same hostname + TLD, i.e. passcue.me), so porting the application means making a Kubernetes Deployment that stitches them together (they end up in the same pod either way).

At a high level, the resource configuration is going to need:

  • A deployment with 2 containers
  • A service to make the front and backends accessible
  • A host-based ingress to make both front and backend reachable from the internet

Here’s what the configuration looks like:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: passcue-ing
  annotations:
    ingress.kubernetes.io/class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/limit-rps: "20"
spec:
  tls:
  - hosts:
    - "passcue.me"
    - "mail.passcue.me"
    secretName: letsencrypt-certs-all
  rules:
  - host: "passcue.me"
    http:
      paths:
      - path: "/.well-known/acme-challenge"
        backend:
          serviceName: letsencrypt-helper-svc
          servicePort: 80
      - path: "/"
        backend:
          serviceName: passcue-svc
          servicePort: 80
      - path: "/api"
        backend:
          serviceName: passcue-svc
          servicePort: 4000
  - host: "getpasscue.com"
    http:
      paths:
      - path: "/.well-known/acme-challenge"
        backend:
          serviceName: letsencrypt-helper-svc
          servicePort: 80
      - path: "/"
        backend:
          serviceName: passcue-svc
          servicePort: 80
      - path: "/api"
        backend:
          serviceName: passcue-svc
          servicePort: 4000

---
apiVersion: v1
kind: Service
metadata:
  name: passcue-svc
  labels:
    app: passcue
spec:
  type: LoadBalancer
  selector:
    app: passcue
  ports:
    - name: http
      protocol: TCP
      port: 80
    - name: api
      protocol: TCP
      port: 4000

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: passcue-deployment
  labels:
    app: passcue
spec:
  replicas: 1
  template:
    metadata:
      labels:
        env: prod
        app: passcue
    spec:
      imagePullSecrets:
        - name: gitlab-private-registry
      containers:
      - name: frontend
        image: registry.gitlab.com/path/to/the/passcue-frontend:1.0
        imagePullPolicy: Always
        ports:
          - containerPort: 80
      - name: backend
        image: registry.gitlab.com/path/to/the/passcue-backend:0.1.0
        imagePullPolicy: Always
        env:
          - name: PASSCUE_DB_HOST
            value: rethinkdb-master
        ports:
          - containerPort: 4000

Note that the backend listens at /api while the frontend listens at /. Frontend/backend separation that often takes some extra wrangling of NGINX to accomplish is just a few lines of declarative configuration with Kubernetes (this is a big win for me).

If your app is 12 Factor-ish, you’ll find it easy to configure with just ENV variables – PASSCUE_DB_HOST makes it super easy to specify (as the proper DNS name) which DB to connect to.

At this point I was able to check if everything was working by looking at the kubectl logs for the pods as well as visiting the site in my browser and logging in.
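A minimal sketch of those checks, assuming the labels and container names from the configuration above:

```shell
# Confirm the pod(s) are Running
kubectl get pods -l app=passcue

# Tail logs from each container in the deployment
kubectl logs -l app=passcue -c frontend
kubectl logs -l app=passcue -c backend

# Hit the site; -L follows the HTTP -> HTTPS redirect from the ingress
curl -IL https://passcue.me
```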

One thing to watch out for is to make sure that you use the RIGHT TLS credentials. If you don’t specify the tls configuration properly, the controller will fall back to using ingress.local certs, and you’ll get an error in your browser about the certs not being for the right site. I found the cafe ingress example in the nginxinc/kubernetes-ingress repo to be very useful. Of course, if you use the code above as a base, you likely won’t run into this issue at all.
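A quick way to check which certificate the ingress is actually presenting – if the subject printed here is the self-signed ingress.local fallback rather than your domain, the tls section of the Ingress isn’t being picked up:

```shell
# Print the subject and issuer of the cert served for passcue.me
echo | openssl s_client -connect passcue.me:443 -servername passcue.me 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```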

Step 5: Add Ansible configuration to help automate the process

Right now the kubernetes module on my system isn’t set up quite properly (due to some awkwardness setting up username/password auth on the Kubernetes cluster), so I’m still using kubectl manually (with certs specified in kubeconfig), but here’s what I thought the Ansible tasks should look like:

---
- name: install gitlab registry secret
  kubernetes:
    api_endpoint: "{{k8s_master_ip}}"
    url_username: "{{k8s_api_user}}"
    url_password: "{{k8s_api_pass}}"
    file_reference: ../kubernetes/pods/gitlab-registry-secret.yaml

- name: install pod & services for vadosware blog
  kubernetes:
    api_endpoint: "{{k8s_master_ip}}"
    url_username: "{{k8s_api_user}}"
    url_password: "{{k8s_api_pass}}"
    file_reference: "{{ '../kubernetes/pods/passcue.yaml' | realpath }}"

I’m not 100% sure about the realpath filter usage, but it should be right.
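Once the kubernetes module auth issues are sorted out, deploying would just be a normal playbook run (the playbook and inventory names here are hypothetical):

```shell
# Run the deploy tasks against the cluster, prompting for any vault secrets
ansible-playbook -i inventory/production deploy-passcue.yml
```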

Wrapping up

Thanks to a lot of previous work/steps, much of this was pretty easy to put together. Now I’ve got a full application (check it out at passcue.me) running on Kubernetes! It’s been a long road, but supremely rewarding.