Kicking The Tires On Fathom

Kicking the tires on Fathom (https://usefathom.com/)

vados

8 minute read

Fathom logo + Kubernetes logo

tl;dr - I set up Fathom for an application running in my small Kubernetes cluster. It was easy, but required a little hackery to properly initialize Fathom (in particular, creating the root user).

Recently I came across Fathom (usefathom/fathom on github) thanks to restoreprivacy.com’s google-alternatives page. They also got posted on Hacker News, which was cool to see.

Up until now I’ve been using Matomo (formerly Piwik) for my website analytics (for example on this blog) – it’s got a bucketload of features and is relatively easy to set up, along with having some good defaults. It can also track multiple sites, which is a nice feature: I can have a (Kubernetes) cluster-wide instance running and do basic page load metrics for lots of sites at the same time. While Matomo is a fantastic tool, I’ve always wondered if I could make a much simpler tool that just relied on SQLite or even a plain append-only log for its database. I am very often only concerned with page views (mostly because I don’t know better) and a few metrics (maybe browser feature set as well), and kind of felt that Matomo had more features than I needed. In my mind, I wanted to write a small, quick-and-dirty Golang HTTP-based service that could do a super small subset of what Matomo does. To my surprise, someone had already done it, and Fathom looks pretty good.

In this post I’m going to deploy it alongside the Deployment that’s running techjobs.tokyo, and use it to monitor page clicks and visitors on the site. The deployment will be pretty simple, but I’ll be integrating it with my previously discussed Makefile-driven infrastructure repos so that whenever I deploy techjobs.tokyo, Fathom will come up with the website automatically.

Step 0: RTFM

Of course, before I start I’ll need to at least familiarize myself with the Fathom documentation.

Along with Fathom, if you’re unfamiliar with Kubernetes, take a look at the Kubernetes Documentation and maybe at some of my previous posts. Even if you’re unfamiliar with Kubernetes, the generated YAML should be pretty easy to read and understand if you’ve looked at anything like Docker Compose configs before.

Step 1: Making the Kubernetes Resources

So now that I’ve got a good idea of how to work with Fathom, it’s time to make the Kubernetes resources that will support the installation. Off the top of my head I should need:

  • A ConfigMap to hold the configuration that Fathom expects
  • A Deployment to actually run Fathom
  • A Service to handle internal traffic to the fathom instance
  • An Ingress to handle external traffic to the fathom instance (from the actual site on the WWW)

Since I’m going to stand up Fathom right next to the application, it will go into the -infra folder that I have set up for the techjobs.tokyo project.

Here’s what the resource definitions ended up looking like and some assorted notes (make sure to replace your-app* if you see it):

fathom.configmap.yaml.pre:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fathom-config
  namespace: your-app-namespace
data:
  fathom.env: |
    FATHOM_DEBUG=true
    FATHOM_DATABASE_DRIVER="sqlite3"
    FATHOM_DATABASE_NAME="/var/data/fathom/fathom.sqlite"
    FATHOM_DATABASE_USER=""
    FATHOM_DATABASE_PASSWORD=""
    FATHOM_DATABASE_HOST=""
    FATHOM_DATABASE_SSLMODE=""
    FATHOM_SECRET="${FATHOM_SECRET}"

NOTE: this yaml file is actually a yaml.pre file, this means that in my Makefile I use envsubst to fill in the FATHOM_SECRET variable you see there. I use git-crypt along with my PGP key to encrypt and store my secrets right alongside the infrastructure code to keep things easily automatable and secure. I’ve written about the pattern before so I won’t go into it here.

Here’s what the Makefile target that builds the file looks like:

generated:
    mkdir -p generated

fathom-configmap-yaml: generated
    export FATHOM_SECRET=`cat ../secrets/fathom/fathom.secret` && \
    envsubst < fathom.configmap.yaml.pre > generated/fathom.configmap.yaml

If you’re unfamiliar with Kubernetes, the ConfigMap here is used to populate files in a folder (when it’s mounted in the Deployment which contains the Pod), so this configmap basically indirectly contains the contents of the fathom config file.

fathom.deployment.yaml:

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: fathom
  namespace: your-app-namespace
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: your-app
        tier: admin
        env: prod
    spec:
      containers:
      - name: fathom
        image: usefathom/fathom:latest
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8080
        volumeMounts:
          - name: data
            mountPath: /var/data/fathom
          - name: config
            mountPath: /etc/fathom
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: fathom-config

NOTE - I’m actually using an emptyDir volume here for the data because I’m not particularly worried about keeping the data long term! If you want to keep the data for a longer period (surviving pod deletion and rescheduling), use a different volume type.
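For reference, here’s a rough sketch of the kind of PersistentVolumeClaim you could swap in for the emptyDir – the claim name, size, and reliance on a default StorageClass are all assumptions you’d adjust for your cluster:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # hypothetical claim name; pick whatever fits your naming scheme
  name: fathom-data
  namespace: your-app-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

You’d then replace emptyDir: {} in the Deployment’s data volume with persistentVolumeClaim: { claimName: fathom-data }.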

NOTE - while the above works, configuring Fathom properly and creating users as needed requires a little more hackery – see the HICCUP section below this one.

fathom.svc.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: fathom
  namespace: your-app-namespace
  labels:
    app: your-app
    tier: admin
spec:
  selector:
    app: your-app
    tier: admin
    env: prod
  ports:
    - name: fathom
      protocol: TCP
      port: 8080

fathom.ingress.yaml:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fathom
  namespace: your-app-namespace
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/limit-rps: "20"
    ingress.kubernetes.io/proxy-body-size: "10m"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "traefik"
spec:
  tls:
  - hosts:
    - "path.to.fathom.on.your.domain.tld"
    secretName: your-app-fathom-tls
  rules:
  - host: "path.to.fathom.on.your.domain.tld"
    http:
      paths:
      - path: "/"
        backend:
          serviceName: fathom
          servicePort: 8080

NOTE - if your nameserver isn’t automatically configured, you’re going to have to go and ensure that browsers can actually resolve fathom.<your app>.tld. You may also have some ingress clashes, and might have to make sure the secretName is different (or merge this ingress with another one), especially if you’re using a tool like jetstack/cert-manager to do automatic cert generation.

ONE MORE THING – If you’re using jetstack/cert-manager and Traefik for your Kubernetes ingress controller, you need to ensure that the automatic http->https redirect is NOT enabled. So the following lines should NOT be present in your traefik config:

        [entryPoints.http.redirect]
                entryPoint = "https"

Don’t fret though – if the ingress.kubernetes.io/ssl-redirect: "true" annotation is set on the ingress, it will redirect to SSL afterwards, so you won’t be stuck exposing the endpoint over plain HTTP forever. Let’s Encrypt will create a pod (with a service and an ingress) that serves the /.well-known/<GIBBERISH> endpoint, use http01 validation, and then your ingress will redirect everything to HTTPS.

Hiccup: Configuring & Initializing Fathom properly

One thing I ran into was that I needed to initialize Fathom with a user before I could use it. Unfortunately, Fathom currently doesn’t support configuring a root user straight from configuration – you have to shell into the container and use the CLI tool (fathom register). I wrote a ticket about it, but I have no idea how they’ll respond, so I’m not sure if they’ll be a fan of the approach.

For now, this means I have to solve the problem in a somewhat hacky manner. My first thought was Kubernetes init containers, but they run before the main containers, and since I’m using an emptyDir, I can’t influence the DB (which doesn’t even exist beforehand anyway). If I were using a real volume, maybe I could mount the volume in the init container and then do the registration, but that doesn’t look like an option right now.

Another thought would be to use a Job, but the emptyDir and the fact that I don’t have a way to persist state right now seems to invalidate that idea.

I’m opting for the hacky solution here (please shoot me an email if you have a better idea!) – I’m going to change the Pod command and arguments inside the fathom.deployment.yaml resource and splice in a delayed fathom register call with a pre-chosen password (which makes it a fathom.deployment.yaml.pre file). This is what the hack looks like:

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: fathom
  namespace: your-app-namespace
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: your-app
        tier: admin
        env: prod
    spec:
      containers:
      - name: fathom
        image: usefathom/fathom:latest
        command:
          - /bin/sh
          - -c
          - |
            (sleep 20; /app/fathom --config=/etc/fathom/fathom.env register --email=${FATHOM_ROOT_USER_NAME} --password=${FATHOM_ROOT_USER_PASSWORD}) &
            /app/fathom --config=/etc/fathom/fathom.env server
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8080
        volumeMounts:
          - mountPath: /etc/fathom
            name: config
          - mountPath: /var/data/fathom
            name: data
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: fathom-config

I’m not a bash master, so I got lots of help from SO to figure out that I needed ( ... ) to group properly.
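If the grouping trick is new to you, here’s a minimal standalone illustration, with echo standing in for the fathom commands:

```shell
# Without the parentheses, '&' would background only the command right
# before it. ( ... ) groups "sleep, then register" into one subshell,
# and the trailing '&' backgrounds that whole group, so the foreground
# command (the server) starts immediately.
(sleep 2; echo "delayed: registering root user") &
echo "immediate: starting server"
wait  # block until the backgrounded group finishes
# prints "immediate: starting server" first, then the delayed line ~2s later
```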

There’s a lot to be disliked about how hacky this solution is, but for now I’m going to ignore those feelings. Those of you who might be trying to use this with a persistent volume, be careful – I’m not sure if fathom register will work twice if the DB and user already exist, or if it will crash with an error or do other funny stuff.

Here’s what the log from fathom looks like (I added the note about the 20 second wait):

$ k logs -f fathom-7bdf74cb94-5wmnw -n your-app-namespace
time="2018-08-13T05:55:59Z" level=info msg="Fathom 1.0.0"
time="2018-08-13T05:55:59Z" level=info msg="Configuration file: /etc/fathom/fathom.env"
time="2018-08-13T05:55:59Z" level=info msg="Connected to sqlite3 database: /var/data/fathom/fathom.sqlite?_loc=auto"
time="2018-08-13T05:55:59Z" level=info msg="Applied 4 database migrations!"
time="2018-08-13T05:55:59Z" level=info msg="Server is now listening on :8080"
---- 20 seconds elapses ----
time="2018-08-13T05:56:19Z" level=info msg="Fathom 1.0.0"
time="2018-08-13T05:56:19Z" level=info msg="Configuration file: /etc/fathom/fathom.env"
time="2018-08-13T05:56:19Z" level=info msg="Connected to sqlite3 database: /var/data/fathom/fathom.sqlite?_loc=auto"
time="2018-08-13T05:56:19Z" level=info msg="Created user admin@techjobs.tokyo"

After logging in, here’s what I was greeted with:

fathom main page

Future work

As you can see it’s pretty easy to get started with a simple deployment of Fathom – this is basically identical to just about any single-container Kubernetes Deployment that needs an Ingress to go with it.

It would be really cool to make a Kubernetes Operator (CRD + custom controller to watch for and manipulate the CRDs) out of this. I could probably get up and running really quickly if I used something like CoreOS’s Operator Framework, but I really want to build one from near scratch so that I can feel out the realities of developing Kubernetes-internal programs as much as I can. Either way, it’d take more than a few hours so I’ll leave it for another day.

Wrapup

Though things got a bit hacky (as they tend to), it was all in all pretty fun and easy to set up Fathom. It’s just the kind of simple tool that I love to run, kudos to the Fathom contributors for making such an awesome easy-to-use tool!

Did you find this read beneficial? Send me questions/comments/clarifications.
Want my expertise on your team/project? Send me interesting opportunities!