Securing Your Kubernetes Cluster

Good resources I've stumbled upon for securing your Kubernetes cluster(s)

vados

8 minute read

tl;dr - Check out Kubernetes features like PodSecurityPolicy and NetworkPolicy. There are also fantastic, fun, analogy-laden talks from KubeCon 2017 (Austin) and KubeCon 2018 (Copenhagen). CIS standards for Kubernetes clusters exist, and companies like Aqua produce tools like kube-bench that let you test your clusters against the CIS benchmarks. It's also important to remember to secure the machine as well as the Kubernetes cluster – so the usual Unix server administration advice…
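As a taste of the kind of thing the post covers, here's a minimal sketch of a default-deny NetworkPolicy (the name and namespace below are hypothetical); applying one of these per namespace forces you to explicitly allow traffic:

```yaml
# Hypothetical example: deny all ingress traffic to pods in the
# "default" namespace until other policies explicitly allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress       # no ingress rules listed, so all ingress is denied
```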

k8s Container Linux Ignition with rkt and kube-router

Modifying a recently created Container Linux Ignition configuration to use rkt and kube-router (instead of containerd and Canal).

vados

7 minute read

I recently wrote a post about switching back to Container Linux for my small Kubernetes cluster, in which I outlined everything I needed to do to get it up and running. Even more recently, I decided I wanted to go ahead and run “rktnetes” to try and take advantage of its annotation-powered stage1 selection, and figured I should post that up too for any fellow rkt enthusiasts!
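For the curious, per-pod stage1 selection looked roughly like the annotation below. This is a sketch from memory of the (long-deprecated) rktnetes interface, so treat the annotation name and pod details as illustrative:

```yaml
# Sketch of rktnetes per-pod stage1 selection -- the annotation name
# is recalled from the old rktnetes docs, not verified against a
# running cluster, so treat it as illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: kvm-isolated-pod          # hypothetical pod name
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-kvm
spec:
  containers:
    - name: app
      image: nginx:alpine
```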

Yet another cluster re-install after switching back to Container Linux

Switching back to Container Linux, and using kubeadm + Ignition to install Kubernetes.

vados

13 minute read

tl;dr - After a bad kernel upgrade (pacman -Syu) on my Arch-powered server I decided to go back to Container Linux, after being equal parts annoyed by Arch and encouraged by the press release put out by Red Hat. This time around I spent much longer with the Ignition config files in conjunction with kubeadm, and ended up with a bootable master node. Feel free to check out the tl;dr at the end.
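To give a flavor, here's a minimal sketch of a Container Linux Config (the YAML that the ct tool transpiles into Ignition JSON) enabling a one-shot unit on first boot; the unit name, paths, and flags are hypothetical:

```yaml
# Hypothetical Container Linux Config snippet -- run it through the
# `ct` transpiler to produce the actual Ignition JSON.
systemd:
  units:
    - name: kubeadm-init.service   # hypothetical one-shot unit
      enabled: true
      contents: |
        [Unit]
        Description=Run kubeadm init on first boot
        After=network-online.target

        [Service]
        Type=oneshot
        ExecStart=/opt/bin/kubeadm init --config /etc/kubeadm/config.yaml
        RemainAfterExit=true

        [Install]
        WantedBy=multi-user.target
```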

Minimal effort build improvements and a GHC 8.2.2 upgrade

How attempting to speed up my CI builds led to upgrading to GHC 8.2.2 (and eventually speeding up my CI builds)

vados

19 minute read

tl;dr - On a Haskell project I'm working on, I started with ~20+ minute cold-cache builds in the worst case in my GitLab-powered CI environment, then found some small ways to improve. Very recently I decided I wasn't satisfied with ~10-15 minute builds and took the laziest, least-effort steps I could find to get to <10 minute cold-cache builds (~5 min best case). Check out the TLDR section to see the Dockerfiles and steps I took, summarized.
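By way of illustration (not the exact config from the post), caching Stack's directories between GitLab CI runs looks something like this; the builder image, cache paths, and job layout are assumptions:

```yaml
# Hypothetical .gitlab-ci.yml sketch: cache Stack's dependency and
# build artifacts between pipeline runs to avoid cold-cache rebuilds.
image: fpco/stack-build:lts   # assumed builder image

variables:
  # Redirect STACK_ROOT into the project dir so GitLab can cache it
  STACK_ROOT: "$CI_PROJECT_DIR/.stack-root"

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - .stack-root/
    - .stack-work/

build:
  stage: build
  script:
    - stack build --test --no-run-tests
```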

Better k8s monitoring part 3: Adding request tracing with OpenTracing (powered by Jaeger)

Deploying Jaeger to enable tracing support for an application on my Kubernetes cluster

vados

61 minute read

tl;dr - I spent a bunch of time stumbling through getting kim/opentracing integrated into my small Servant-powered web app. In the end I actually switched to servant-tracing due to integration issues, and was able to get it working – there's a TON of wandering in this post (basically half the time you're reading an approximation of my stream of consciousness; some might consider the experiments with kim/opentracing a waste of time, but I do not), so please check out the…
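For reference, getting a development-grade Jaeger onto a cluster can be as simple as the sketch below; the image tag is an assumption on my part, and the all-in-one image stores traces in memory, so it's for experiments rather than production:

```yaml
# Minimal sketch: Jaeger "all-in-one" (in-memory storage) for
# development-time tracing experiments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
        - name: jaeger
          image: jaegertracing/all-in-one:1.8   # assumed version tag
          ports:
            - containerPort: 16686   # web UI
            - containerPort: 6831    # agent (UDP, thrift compact) used by most clients
              protocol: UDP
```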

Switching From kube-lego To cert-manager

Switching from kube-lego to cert-manager (without Helm)

vados

16 minute read

tl;dr - I switched from Jetstack's kube-lego to cert-manager (its natural successor), and am pretty happy with the operator pattern they've decided to adopt. The switchover was easy, but I tripped myself up for a bit because I don't like using Helm. Complete resource definitions (that worked for me, YMMV) are in the TLDR section at the bottom.
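As a taste of the resource definitions involved, here's a rough sketch using the cert-manager API group of that era (certmanager.k8s.io/v1alpha1); the email, domain, and ingress class are hypothetical, and newer releases use cert-manager.io/v1 with a different solver layout:

```yaml
# Sketch of an ACME issuer + certificate in the old v1alpha1 API --
# field layout recalled from that era, so double-check against the
# docs for your cert-manager version.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # hypothetical contact email
    privateKeySecretRef:
      name: letsencrypt-prod-key
    http01: {}                      # HTTP-01 challenge (v1alpha1 style)
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com-tls
  namespace: default
spec:
  secretName: example-com-tls      # where the signed cert lands
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - example.com                  # hypothetical domain
  acme:
    config:
      - http01:
          ingressClass: nginx      # hypothetical ingress class
        domains:
          - example.com
```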