Securing Your Kubernetes Cluster

Good resources I've stumbled upon for securing your Kubernetes cluster(s)


8 minute read

tl;dr - Check out Kubernetes features like PodSecurityPolicy and NetworkPolicy. There are also fantastic, fun, analogy-laden talks from KubeCon 2017 (Austin) and KubeCon 2018 (Copenhagen). CIS benchmarks for Kubernetes clusters exist, and companies like Aqua produce tools like kube-bench that let you test your clusters against those benchmarks. It’s also important to remember to secure the machine as well as the Kubernetes cluster – so the usual Unix server administration advice…
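As a concrete illustration of the NetworkPolicy feature mentioned above, here's a minimal default-deny ingress policy sketch (the namespace name is a placeholder, and enforcement assumes a CNI plugin that supports NetworkPolicy, such as Calico):

```yaml
# Deny all ingress traffic to pods in the "demo" namespace (placeholder name).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}    # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all ingress is denied
```

From there, you allow traffic back in selectively with additional policies that match specific pods.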

Even faster Rust builds in GitLab CI

Getting even faster builds out of Rust on GitLab CI


9 minute read

tl;dr - I applied a few patterns I’ve used on other projects to a GitLab CI-powered Rust project to achieve <2min builds. Basically just caching at different layers – caching via the Docker image builder pattern at the Docker level, aggressive caching with GitLab CI at the CI runner level, and one more step of combining some build steps (probably unnecessarily).
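The Docker-level caching mentioned above can be sketched as a hypothetical two-stage Dockerfile (the crate name `myapp` and image tags are placeholders, not taken from the post) that builds dependencies in a cacheable layer before compiling the application itself:

```dockerfile
# Stage 1: build dependencies against a dummy main so this layer is
# cached until Cargo.toml / Cargo.lock change.
FROM rust:1.70 AS builder
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs && cargo build --release
# Now copy the real sources and rebuild only the crate itself.
COPY src ./src
RUN touch src/main.rs && cargo build --release

# Stage 2: copy just the binary into a slim runtime image.
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

The `touch` before the second build forces cargo to recompile the binary even though the dependency layer was restored from cache.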

k8s Container Linux ignition with rkt and kube-router

Modifying a recently created Container Linux Ignition configuration to use rkt and kube-router (instead of containerd and Canal).


7 minute read

I recently wrote a post about switching back to Container Linux for my small Kubernetes cluster, in which I outlined everything I needed to do to get it up and running. Even more recently, I decided I wanted to go ahead and run “rktnetes” to try and take advantage of its annotation-powered stage1 selection, and figured I should post that up too for any fellow rkt enthusiasts!
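For reference, annotation-powered stage1 selection looked roughly like the sketch below; the annotation key is from my recollection of the old rktnetes docs, so verify it against your kubelet version (the pod name and image are placeholders):

```yaml
# Hedged sketch: selecting a rkt stage1 image per-pod via annotation.
apiVersion: v1
kind: Pod
metadata:
  name: kvm-isolated-pod    # placeholder name
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-kvm
spec:
  containers:
    - name: app
      image: nginx:alpine
```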

Yet another cluster re-install after switching back to Container Linux

Switching back to Container Linux, and using kubeadm + Ignition to install Kubernetes.


13 minute read

tl;dr - After a bad kernel upgrade (pacman -Syu) on my Arch-powered server I decided to go back to Container Linux, after being equal parts annoyed by Arch and encouraged by the press release put out by Red Hat. This time, I spent much more time with the Ignition config files in conjunction with kubeadm and ended up with a bootable master node. Feel free to check out the tl;dr at the end.
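For a rough idea of the shape of those config files, here's a minimal Ignition 2.x fragment that enables a systemd unit; the unit contents are placeholders, not the actual config from the post:

```json
{
  "ignition": { "version": "2.2.0" },
  "systemd": {
    "units": [
      {
        "name": "kubelet.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=kubelet (placeholder unit)\n\n[Service]\nExecStart=/opt/bin/kubelet\nRestart=always\n\n[Install]\nWantedBy=multi-user.target"
      }
    ]
  }
}
```

Ignition runs once on first boot, so changes to a config like this require re-provisioning the machine rather than editing in place.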

Minimal effort build improvements and a GHC 8.2.2 upgrade

How attempting to speed up my CI builds led to upgrading to GHC 8.2.2 (and eventually speeding up my CI builds)


19 minute read

tl;dr - On a Haskell project I’m working on, I started with 20+ minute cold-cache builds in the worst case in my GitLab-powered CI environment, then found some small ways to improve. Very recently I decided I wasn’t satisfied with ~10-15 minute builds and did the laziest, least-effort steps I could find to get to <10 minute cold-cache builds (~5min best case). Check out the TLDR section to see the Dockerfiles and steps I took, summarized.
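One low-effort step, runner-level caching, can be sketched as a hypothetical `.gitlab-ci.yml` fragment (the image tag and paths are assumptions; `STACK_ROOT` is pointed inside the project because GitLab CI can only cache paths under the project directory):

```yaml
# Hedged sketch: per-branch caching of stack's package and work directories.
variables:
  STACK_ROOT: "${CI_PROJECT_DIR}/.stack-root"

build:
  image: fpco/stack-build:lts    # placeholder image tag
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
    paths:
      - .stack-root/
      - .stack-work/
  script:
    - stack build --fast
```

A warm cache means GHC only rebuilds modules that changed, which is where most of the cold-vs-warm build-time gap comes from.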

Better k8s monitoring part 3: Adding request tracing with OpenTracing (powered by Jaeger)

Deploying Jaeger to enable tracing support for an application on my Kubernetes cluster


61 minute read

tl;dr - I spent a bunch of time stumbling through getting kim/opentracing integrated into my small Servant-powered web app. In the end I actually switched to servant-tracing due to some integration issues, and was able to get it working – there’s a TON of wandering in this post (basically half the time you’re reading an approximation of my stream of consciousness; some might consider the experiments with kim/opentracing a waste of time, but I do not), so please check out the…