Up and Running With Linkerd v1

Getting Linkerd 1 working on a small app in my Kubernetes cluster

vados

37 minute read

tl;dr - I got Linkerd v1 working on a small 3-tier app on my k8s cluster. Linkerd v1 sports an older (but much more battle-tested) and simpler model: it runs a proxy on every node as a DaemonSet. Linkerd v2 runs a set of control programs (the “control plane”) plus per-application sidecar containers that act as proxies, and while that’s cool I’m not going to run it just yet.

Running Untrusted Workloads K8s Container Linux Part 3

Third and final part of my adventures and mistakes in trying to get untrusted workloads running on k8s (with container linux underneath)

vados

49 minute read

tl;dr - After struggling through setting up containerd’s untrusted workload runtime and building a static kata-runtime and a neutered-but-static qemu-system-x86_64 to use, I succeeded in hooking up containerd to use kata-runtime, only to fail @ the last step: the pods that were created ran qemu properly but couldn’t be communicated with, and would immediately make the k8s node they were running on go NotReady due to PLEG errors. I did a lot of work to partially succeed (if you…

Running Untrusted Workloads K8s Container Linux Part 2

Second part of my adventures and mistakes in trying to get untrusted workloads on k8s (with container linux underneath)

vados

6 minute read

tl;dr - I came across rkt’s ability to use alternate stage 1s and got it working, but then abandoned it due to problems getting rook running and a lack of CRI compatibility (at the time), before even trying to compare it with the QEMU-in-a-pod approach. These notes are very old (I don’t use Container Linux for my cluster anymore), and I can’t believe I quit so quickly without more thorough investigation, but evidently I did, so there’s not much to see in this post, but maybe it…

Running Untrusted Workloads K8s Container Linux Part 1

Adventures and mistakes in trying to get untrusted workloads on k8s (with container linux underneath)

vados

31 minute read

tl;dr - I kinda succeeded in getting simplistic VM-level isolation working on a Container Linux powered Kubernetes cluster, with lots of failures along the way. This post is cobbled-together notes from the exploration stage, which ultimately led to an extremely hackish CoreOS VM powered by qemu running inside a privileged Kubernetes pod, itself running on top of a dedicated CoreOS machine. The notes that make up this post are very old; I’ve actually already switched to Ubuntu…

Hetzner fresh Ubuntu (18.04 LTS) install to single node Kubernetes cluster with ansible

I install Kubernetes on yet another linux distribution, this time Ubuntu 18.04 LTS, using ansible to make it happen

vados

30 minute read

tl;dr - I installed Kubernetes on Ubuntu 18.04 LTS via Ansible (kubeadm under the covers) on a Hetzner dedicated server. Before doing so, I debugged/tested the playbook in a local VirtualBox VM with a fresh Ubuntu install before attempting on the dedicated hardware. There’s a gitlab repo (ansible-hetzner-ubuntu-1804-k8s-setup) that contains a copy-paste job of the finished work – the idea is that you should be able to run that playbook and go from a fresh Hetzner dedicated Ubuntu…

Better k8s monitoring part 3: Adding request tracing with OpenTracing (powered by Jaeger)

Deploying Jaeger to enable tracing support for an application on my kubernetes cluster

vados

61 minute read

tl;dr - I spent a bunch of time stumbling through getting kim/opentracing integrated into my small Servant-powered web app. In the end I actually switched to servant-tracing due to some integration issues, and was able to get it working. There’s a TON of wandering in this post (basically half the time you’re reading an approximation of my stream of consciousness; some might consider the experiments with kim/opentracing a waste of time, but I do not), so please check out the…

Switching From kube-lego To cert-manager

Switching from kube-lego to cert-manager (without Helm)

vados

16 minute read

tl;dr - I switched from Jetstack’s kube-lego to cert-manager (its natural successor), and am pretty happy with the operator pattern they’ve decided to adopt. The switchover was easy, but I tripped myself up for a bit because I don’t like using Helm. Complete resource definitions (that worked for me, YMMV) are in the TLDR section @ the bottom.

Better k8s Monitoring Part 2: Adding logging management with the EFKK stack

(FluentD + ElasticSearch + Kibana) x Kubernetes

vados

45 minute read

tl;dr - I started trying to set up EFK (Elastic, FluentD, Kibana), and hit frustrating integration issues/bugs with Elastic+Kibana 6.x, tripped myself up a LOT, almost gave up and went with Graylog, but then rallied to finish setting everything up by basically fixing my own bad configuration. Skip to the TLDR section for the hand-written working k8s configuration.

Better K8s Monitoring Part 1: Adding Prometheus

Adding better monitoring for applications running in my k8s cluster using Prometheus.

vados

10 minute read

It’s been a while since I learned of the wonders (and cleared up my misconceptions) of dedicated hosting and set up a “Baremetal” CoreOS single-node k8s cluster. For a while now I’ve maintained a single large (by my standards) machine that has been running Kubernetes, and purring right along – outside of the occasional restart or operator error, it hasn’t gone down and has kept my applications running. While most of the applications don’t get much…

Switch From ployst/docker-letsencrypt to Jetstack's kube-lego

Switching from ployst/docker-letsencrypt to jetstack/kube-lego for auto-generated SSL certs with Kubernetes.

vados

7 minute read

tl;dr - I switched from ployst/docker-letsencrypt, which I initially considered less complicated than jetstack/kube-lego. Turns out jetstack/kube-lego is pretty simple and *just works*, which is amazing. Props to the team over at jetstack, and as always the Kubernetes team, for making this more intelligent automation possible. You could honestly just read the jetstack/kube-lego guide, it’s real good. If you wanna see my path through it, keep reading.