Running Untrusted Workloads K8s Container Linux Part 3

Third and final part of my adventures and mistakes in trying to get untrusted workloads running on k8s (with Container Linux underneath)

vados

52 minute read

tl;dr - After struggling through setting up containerd’s untrusted workload runtime, and building a static kata-runtime and a neutered-but-static qemu-system-x86_64 for it to use, I succeeded in hooking containerd up to kata-runtime, only to fail at the last step: the pods that were created ran qemu properly but couldn’t be communicated with, and would immediately push the k8s node they were running on into the NotReady state due to PLEG errors. I did a lot of work to partially succeed (if you…
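For context, the containerd CRI plugin of that era picked the untrusted runtime from a dedicated config stanza, with pods opting in via the `io.kubernetes.cri.untrusted-workload: "true"` annotation (this all predates the RuntimeClass API that replaced it). A minimal sketch, assuming kata-runtime lives at /usr/local/bin/kata-runtime:

```toml
# /etc/containerd/config.toml (excerpt), containerd 1.1-era CRI plugin
[plugins.cri.containerd.untrusted_workload_runtime]
  # Pods annotated io.kubernetes.cri.untrusted-workload: "true" run here
  runtime_type = "io.containerd.runtime.v1.linux"
  runtime_engine = "/usr/local/bin/kata-runtime"
```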

Running Untrusted Workloads K8s Container Linux Part 2

Second part of my adventures and mistakes in trying to get untrusted workloads on k8s (with Container Linux underneath)

vados

6 minute read

tl;dr - I came across rkt’s ability to use alternate stage1s and got it working, but then abandoned it due to problems getting rook running and a lack of CRI compatibility (at the time), before even trying to compare it with the QEMU-in-a-pod approach. These notes are very old (I don’t use Container Linux for my cluster anymore), and I can’t believe I quit so quickly without more thorough investigation, but evidently I did, so there’s not much to see in this post, but maybe it…
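For reference, the VM-backed path in plain rkt is just a stage1 swap at the command line; a sketch, with the image version being illustrative:

```sh
# Run under the KVM stage1 (a lightweight VM) instead of the default
# systemd-nspawn stage1
rkt run --stage1-name=coreos.com/rkt/stage1-kvm:1.30.0 docker://nginx
```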

Running Untrusted Workloads K8s Container Linux Part 1

Adventures and mistakes in trying to get untrusted workloads on k8s (with Container Linux underneath)

vados

31 minute read

tl;dr - I kinda succeeded in getting simplistic VM-level isolation working on a Container Linux powered Kubernetes cluster, with lots of failures along the way. This post is cobbled-together notes from the exploration stage, which ultimately led to an extremely hackish CoreOS VM powered by qemu running inside a privileged Kubernetes pod, itself running on top of a dedicated CoreOS machine. The notes that were cobbled together to make this post are very old; I’ve actually already switched to Ubuntu…
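The shape of the hack, for the curious: a privileged pod that can reach the host’s /dev/kvm and runs qemu inside it. A minimal sketch; the image name is hypothetical and all the qemu plumbing is omitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qemu-vm
spec:
  containers:
    - name: qemu
      # Hypothetical image bundling qemu-system-x86_64 and a CoreOS disk image
      image: my-qemu-coreos-image
      securityContext:
        privileged: true        # grants access to host devices like /dev/kvm
      volumeMounts:
        - name: dev-kvm
          mountPath: /dev/kvm
  volumes:
    - name: dev-kvm
      hostPath:
        path: /dev/kvm
```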

Hetzner fresh Ubuntu (18.04 LTS) install to single-node Kubernetes cluster with Ansible

I install Kubernetes on yet another Linux distribution, this time Ubuntu 18.04 LTS, using Ansible to make it happen

vados

30 minute read

tl;dr - I installed Kubernetes on Ubuntu 18.04 LTS via Ansible (kubeadm under the covers) on a Hetzner dedicated server. Before touching the dedicated hardware, I debugged/tested the playbook in a local VirtualBox VM with a fresh Ubuntu install. There’s a GitLab repo (ansible-hetzner-ubuntu-1804-k8s-setup) that contains a copy-paste job of the finished work – the idea is that you should be able to run that playbook and go from a fresh Hetzner dedicated Ubuntu…
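The playbook in the repo is the real artifact, but the heart of it is just kubeadm driven by a couple of Ansible tasks. A rough sketch, assuming the Kubernetes apt repository is already configured on the host:

```yaml
# Hypothetical excerpt; see ansible-hetzner-ubuntu-1804-k8s-setup for the real thing
- hosts: k8s_master
  become: yes
  tasks:
    - name: Install kubelet, kubeadm and kubectl
      apt:
        name: [kubelet, kubeadm, kubectl]
        state: present
    - name: Initialize the control plane with kubeadm
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf   # skip if already initialized
```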

Securing Your Kubernetes Cluster

Good resources I've stumbled upon that discuss securing your Kubernetes cluster(s)

vados

8 minute read

tl;dr - Check out Kubernetes features like PodSecurityPolicy and NetworkPolicy. There are also fantastic, fun, analogy-laden talks from KubeCon 2017 (Austin) and KubeCon 2018 (Copenhagen). CIS standards for Kubernetes clusters exist, and companies like Aqua produce tools like kube-bench that let you test your clusters against those benchmarks. It’s also important to remember to secure the machine as well as the Kubernetes cluster – so the usual Unix server administration advice…
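To give NetworkPolicy a concrete face, here’s the canonical default-deny-ingress policy: apply it to a namespace (the name here is just an example) and nothing gets in unless a later policy allows it.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app            # example namespace
spec:
  podSelector: {}              # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all ingress is denied
```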

k8s Container Linux Ignition with rkt and kube-router

Modifying a recently created Container Linux Ignition configuration to use rkt and kube-router (instead of containerd and Canal).

vados

7 minute read

I recently wrote a post about switching back to Container Linux for my small Kubernetes cluster, in which I outlined everything I needed to do to get it up and running. Even more recently, I decided I wanted to go ahead and run “rktnetes” to try and take advantage of its annotation-powered stage1 selection, and figured I should post that up too for any fellow rkt enthusiasts!
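If memory serves, the per-pod override looked roughly like the annotation below; treat the exact key as an assumption and check it against your rkt/kubelet versions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kvm-isolated-pod       # hypothetical pod
  annotations:
    # Ask the rkt runtime to use the KVM stage1 for this pod
    rkt.alpha.coreos.com/stage1-name-override: coreos.com/rkt/stage1-kvm
spec:
  containers:
    - name: app
      image: nginx
```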

Yet another cluster re-install after switching back to Container Linux

Switching back to Container Linux, and using kubeadm + ignition to install Kubernetes.

vados

13 minute read

tl;dr - After a bad kernel upgrade (pacman -Syu) on my Arch-powered server, I decided to go back to Container Linux, after being equal parts annoyed by Arch and encouraged by the press release put out by Red Hat. This time, I spent much more time with the Ignition config files in conjunction with kubeadm and ended up with a bootable master node. Feel free to check out the tl;dr at the end.
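For flavor, a Container Linux Config (the YAML that ct transpiles into Ignition JSON) can enable units and drop files declaratively; a minimal sketch with illustrative contents:

```yaml
# Hypothetical fragment; transpile with ct into Ignition JSON
systemd:
  units:
    - name: kubelet.service
      enabled: true
storage:
  files:
    - path: /opt/bin/kubeadm-init.sh
      filesystem: root
      mode: 0755
      contents:
        inline: |
          #!/usr/bin/env bash
          kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
```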