Disassembling Raid on Hetzner Without Rescue Mode
tl;dr - Disassembling the default-installed RAID1 on Hetzner dedicated servers so you can hand one drive to Rook (Ceph underneath) to manage is doable without going into Hetzner rescue mode: just shrink the array to one drive (credit to user frostschutz on StackOverflow), then remove the second. I’m a huge fan of Hetzner dedicated servers and in particular their Robot Marketplace. Long story short, discovering the Robot Marketplace thanks to someone on HN opened my eyes to the world of affordable dedicated servers (I’ve also written about Hetzner in some previous posts).
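The shrink-then-remove approach can be sketched roughly as below. Device names (`/dev/md0`, `/dev/sdb1`) are placeholders for your actual array and partitions, not the ones from the post — back up first and double-check against `mdadm(8)` before running anything like this:

```shell
# Assumes a two-disk RAID1 at /dev/md0 (hypothetical names)
# 1. Mark one mirror as failed and pull it out of the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# 2. Shrink the array so it is "complete" with a single device
#    (--force is needed because a 1-device RAID1 is unusual)
mdadm --grow /dev/md0 --raid-devices=1 --force

# 3. Wipe the RAID superblock on the freed partition so
#    Rook/Ceph sees a clean disk it can claim
mdadm --zero-superblock /dev/sdb1
```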
Using Gitlab Deploy Tokens With k8s
tl;dr - Gitlab deploy tokens don’t work in a Kubernetes deployment if you follow the normal k8s private registry documentation. This post lays out the workaround/hack I used the last time this came up, to save people some time. Skim through for the process, and jump to the end for the k8s YAML. One thing I found myself doing just recently (literally a few moments ago) was figuring out deploy tokens for a new project that I’m rolling out.
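The general shape of the fix is to feed the deploy token to Kubernetes as a `dockerconfigjson`-type image pull secret and reference it from the workload. All names below are placeholders, not the post’s actual YAML; the base64 payload wraps `{"auths":{"registry.gitlab.com":{"username":"<token-user>","password":"<token>","auth":"<base64 of user:token>"}}}`:

```yaml
# Hypothetical secret + deployment; substitute your own names/images
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-registry
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config JSON>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      imagePullSecrets:
        - name: gitlab-registry   # pulls from the private registry
      containers:
        - name: my-app
          image: registry.gitlab.com/my-group/my-project:latest
```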
Up and Running With Linkerd v1
tl;dr - I got Linkerd v1 working with a small 3-tier app on my k8s cluster. Linkerd v1 sports an older (but much more battle-tested) and simpler model: it runs a proxy on every node as a DaemonSet. Linkerd v2 runs a set of control programs (the “control plane”) plus per-application sidecar containers that act as proxies, and while that’s cool, I’m not going to run it just yet.
Running Untrusted Workloads K8s Container Linux Part 3
tl;dr - After struggling through setting up containerd’s untrusted workload runtime, building a static kata-runtime and a neutered-but-static qemu-system-x86_64 for it to use, I succeeded in hooking containerd up to kata-runtime, only to fail at the last step: the pods that were created ran qemu properly but couldn’t be communicated with, and would immediately push the k8s node they were running on into the NotReady state due to PLEG errors.
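For context, containerd’s untrusted workload runtime from that era was wired up roughly like this in the CRI plugin config (the binary path is a placeholder, and newer containerd releases have since replaced this schema with named runtime handlers):

```toml
# /etc/containerd/config.toml (containerd 1.1-era CRI plugin schema)
[plugins.cri.containerd.untrusted_workload_runtime]
  runtime_type = "io.containerd.runtime.v1.linux"
  runtime_engine = "/usr/local/bin/kata-runtime"
```

Pods then opted in via the `io.kubernetes.cri.untrusted-workload: "true"` annotation, which told containerd to launch them with kata-runtime instead of runc.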
Running Untrusted Workloads K8s Container Linux Part 2
tl;dr - I came across rkt’s ability to use alternate stage1s, got it working, but then abandoned it due to problems getting Rook running and a lack of CRI compatibility (at the time), before even trying to compare it with the QEMU-in-a-pod approach. These notes are very old (I don’t use Container Linux for my cluster anymore), and I can’t believe I quit so quickly without more thorough investigation, but evidently I did, so there’s not much to see in this post - maybe it will serve as a starting point for others, though.
Running Untrusted Workloads K8s Container Linux Part 1
tl;dr - I kinda succeeded in getting simplistic VM-level isolation working on a Container Linux powered Kubernetes cluster, with lots of failures along the way. This post is cobbled-together notes from the exploration stage, which ultimately led to an extremely hackish CoreOS VM powered by qemu running inside a privileged Kubernetes pod, itself running on top of a CoreOS dedicated machine. The notes behind this post are very old - I’ve actually already switched to Ubuntu Server for my Kubernetes cluster - but I figured it was worth editing and releasing them for anyone experimenting with CoreOS Container Linux or Flatcar Linux.
Hetzner fresh Ubuntu (18.04 LTS) install to single node Kubernetes cluster with ansible
tl;dr - I installed Kubernetes on Ubuntu 18.04 LTS via Ansible (kubeadm under the covers) on a Hetzner dedicated server. Before doing so, I debugged/tested the playbook in a local VirtualBox VM with a fresh Ubuntu install before attempting on the dedicated hardware. There’s a gitlab repo (ansible-hetzner-ubuntu-1804-k8s-setup) that contains a copy-paste job of the finished work – the idea is that you should be able to run that playbook and go from a fresh Hetzner dedicated Ubuntu 18.
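The overall shape of a kubeadm-via-Ansible playbook looks something like the skeleton below. This is an illustrative sketch, not the contents of the linked repo - task names, the pod CIDR, and the host group are all made up, and repo/package setup for the Kubernetes apt sources is omitted:

```yaml
# Hypothetical single-node kubeadm playbook skeleton
- hosts: k8s_single_node
  become: yes
  tasks:
    - name: Install kubeadm, kubelet and kubectl
      apt:
        name: [kubeadm, kubelet, kubectl]
        state: present
        update_cache: yes

    - name: Initialize the control plane (idempotent via creates:)
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf

    - name: Allow workloads to schedule on the single (master) node
      command: >
        kubectl taint nodes --all node-role.kubernetes.io/master-
        --kubeconfig /etc/kubernetes/admin.conf
      ignore_errors: yes
```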
Securing Your Kubernetes Cluster
tl;dr - Check out Kubernetes features like PodSecurityPolicy and NetworkPolicy. There are also fantastic, fun, analogy-laden talks from KubeCon 2017 (Austin) and KubeCon 2018 (Copenhagen). CIS benchmarks for Kubernetes clusters exist, and companies like Aqua produce tools like kube-bench that let you test your clusters against them. It’s also important to remember to secure the machine as well as the Kubernetes cluster – so the usual Unix server administration advice applies.
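As a taste of what NetworkPolicy looks like, here’s a minimal default-deny-ingress policy (the namespace name is a placeholder, and enforcement requires a CNI plugin that supports NetworkPolicy, e.g. Calico or Cilium):

```yaml
# Deny all incoming traffic to every pod in the namespace;
# further policies then whitelist the flows you actually want
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app   # hypothetical namespace
spec:
  podSelector: {}     # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
```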
Using Makefiles And Envsubst As An Alternative To Helm And Ksonnet
tl;dr - Why don’t we use Makefiles in <project>-infra repos, git-crypt, and good naming conventions instead of Helm? UPDATE (06/13/2018): After some much-needed prodding from readers who sent emails, I’ve created an example repo to more fully showcase the pattern! You can find the example repo (mrman/makeinfra-pattern) on Gitlab. Check it out and make Merge Requests with any suggestions, discussion, and improvements you can think of!
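The core of the pattern is just `envsubst` rendering templated YAML from Make targets. A minimal sketch - the target names, variables, and `deployment.yaml.tmpl` file are illustrative, not taken from the makeinfra-pattern repo:

```makefile
# Hypothetical fragment of a <project>-infra Makefile
APP_NAME  ?= my-app
IMAGE_TAG ?= latest

# Render the template: envsubst replaces $APP_NAME / $IMAGE_TAG
# occurrences in deployment.yaml.tmpl with the exported values
deployment.yaml: deployment.yaml.tmpl
	export APP_NAME=$(APP_NAME) IMAGE_TAG=$(IMAGE_TAG); \
	envsubst < $< > $@

deploy: deployment.yaml
	kubectl apply -f deployment.yaml

.PHONY: deploy
```

Running `make deploy IMAGE_TAG=v1.2.3` then renders and applies the manifest in one step, with no templating engine beyond gettext’s `envsubst`.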
k8s Container Linux ignition with rkt and kube-router
I recently wrote a post about switching back to Container Linux for my small Kubernetes cluster, in which I outlined everything I needed to do to get it up and running. Even more recently, I decided I wanted to go ahead and run “rktnetes” to try and take advantage of its annotation-powered stage1 selection, and figured I should post that up too for any fellow rkt enthusiasts!