k8s Container Linux Ignition with rkt and kube-router

Modifying a recently created Container Linux Ignition configuration to use rkt and kube-router (instead of containerd and Canal).

vados

7 minute read

I recently wrote a post about switching back to Container Linux for my small Kubernetes cluster, in which I outlined everything I needed to do to get it up and running. Even more recently, I decided I wanted to go ahead and run “rktnetes” to try and take advantage of its annotation-powered stage1 selection, and figured I should post that up too for any fellow rkt enthusiasts!
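For context, the stage1 selection in the rktnetes integration is driven by a pod annotation. The sketch below is written from memory of the rktnetes docs, so treat the annotation key and stage1 image name as illustrative; they may differ by Kubernetes/rkt version:

```yaml
# Illustrative pod spec using the rktnetes stage1 override annotation.
# Annotation key and stage1 image name may differ by Kubernetes/rkt version.
apiVersion: v1
kind: Pod
metadata:
  name: stage1-demo
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: "coreos.com/rkt/stage1-fly"
spec:
  containers:
    - name: app
      image: nginx:alpine
```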

SQLite is threadsafe and parallel-access safe, but I still want to extend it.

Some thoughts on SQLite's thread safety, parallel-access safety, and how I want to extend it

vados

8 minute read

tl;dr - I’ve been doing a lot of work with SQLite lately, using it in one of my projects to test its limits before moving to something like Postgres. I wanted to scale the API up, and after ensuring that SQLite was thread safe and parallel-access safe, I tried, but was blocked by the shared-filesystem limitations of my platform (Kubernetes). This post details some prior art (rqlite, dqlite) and what I want to build for distributing SQLite, which is expressly not what SQLite…
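The shared-filesystem limitation mentioned above is, roughly, a Kubernetes volume access-mode problem: most storage classes only hand out ReadWriteOnce volumes, so only one node can mount the SQLite file at a time. An illustrative claim (names and sizes made up for the example):

```yaml
# Illustrative PVC: ReadWriteOnce means only a single node can mount the volume,
# so an API scaled across nodes can't all open the same SQLite file.
# Multi-node sharing would need ReadWriteMany, which most storage classes lack.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqlite-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```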

Yet another cluster re-install after switching back to Container Linux

Switching back to Container Linux, and using kubeadm + ignition to install Kubernetes.

vados

13 minute read

tl;dr - After a bad kernel upgrade (pacman -Syu) on my Arch-powered server I decided to go back to Container Linux, after being equal parts annoyed by Arch and encouraged by the press release put out by Red Hat. This time, I spent much more time with the Ignition config files in conjunction with kubeadm and ended up with a bootable master node. Feel free to check out the TLDR at the end.
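As a rough idea of what “Ignition config files in conjunction with kubeadm” looks like, here is a minimal Container Linux Config sketch (the YAML that ct transpiles into Ignition JSON) with a one-shot systemd unit that runs kubeadm on first boot. The unit name, binary path, and kubeadm flags are illustrative, not the exact config from the post:

```yaml
# Minimal Container Linux Config sketch (transpile to Ignition JSON with ct).
# Unit name, kubeadm path, and flags are illustrative.
systemd:
  units:
    - name: kubeadm-init.service
      enabled: true
      contents: |
        [Unit]
        Description=Run kubeadm init on first boot
        After=network-online.target

        [Service]
        Type=oneshot
        ExecStart=/opt/bin/kubeadm init --pod-network-cidr=10.244.0.0/16

        [Install]
        WantedBy=multi-user.target
```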

Minimal effort build improvements and a GHC 8.2.2 upgrade

How attempting to speed up my CI builds led to upgrading to GHC 8.2.2 (and eventually speeding up my CI builds)

vados

19 minute read

tl;dr - On a Haskell project I’m working on, I started with 20+ minute worst-case cold-cache builds in my GitLab-powered CI environment, then found some small ways to improve. Very recently I decided I wasn’t satisfied with ~10-15 minute builds and did the laziest, least-effort steps I could find to get to <10 minute cold-cache builds (~5 min best case). Check out the TLDR section for a summary of the Dockerfiles and steps I took.
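The post’s actual speedups come from the Dockerfiles in its TLDR; as one generic example of a low-effort GitLab CI improvement for Haskell builds (not necessarily what the post does), caching Stack’s work directories between runs looks roughly like this, with the image name and paths being illustrative:

```yaml
# Illustrative .gitlab-ci.yml fragment: cache Stack's work directories so
# a full cold-cache build only happens when dependencies actually change.
build:
  image: fpco/stack-build:lts-10.4   # example image; the post uses custom Dockerfiles
  variables:
    STACK_ROOT: "${CI_PROJECT_DIR}/.stack-root"   # keep Stack's root inside the project so it can be cached
  cache:
    key: "${CI_COMMIT_REF_SLUG}"
    paths:
      - .stack-root/
      - .stack-work/
  script:
    - stack build --test --no-run-tests
```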

The cure for impostor syndrome is knowing things

Insight into how I avoid impostor syndrome, and a mindset I think could benefit everyone. Pretty much a rant.

vados

11 minute read

WARNING - This is an opinion-based fluff piece; read at your own risk. There’s no technical writeup or deep technical insight to be gained in this article, if ever there was in any of my posts to begin with.

Better k8s Monitoring Part 3: Adding request tracing with OpenTracing (powered by Jaeger)

Deploying Jaeger to enable tracing support for an application on my Kubernetes cluster

vados

61 minute read

tl;dr - I spent a bunch of time stumbling through getting kim/opentracing integrated into my small Servant-powered web app. In the end I actually switched to servant-tracing due to some integration issues, and was able to get it working – there’s a TON of wandering in this post (basically half the time you’re reading an approximation of my stream of consciousness; some might consider the experiments with kim/opentracing a waste of time, but I do not), so please check out the…
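On the Jaeger side, the usual quick way to get tracing infrastructure onto a cluster is the all-in-one image; a minimal Deployment sketch follows, with the resource names, image tag, and exposed ports being illustrative rather than the exact manifests from the post:

```yaml
# Sketch of running Jaeger's all-in-one image on Kubernetes.
# Names, tag, and ports are illustrative; production setups split agent/collector/query.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
        - name: jaeger
          image: jaegertracing/all-in-one:1.8
          ports:
            - containerPort: 16686   # query UI
            - containerPort: 6831    # agent (compact thrift over UDP)
              protocol: UDP
```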

Switching from kube-lego to cert-manager

Switching from kube-lego to cert-manager (without Helm)

vados

16 minute read

tl;dr - I switched from Jetstack’s kube-lego to cert-manager (its natural successor), and am pretty happy with the operator pattern they’ve decided to adopt. The switch-over was easy, but I tripped myself up for a bit because I don’t like using Helm. Complete resource definitions (that worked for me, YMMV) are in the TLDR section @ the bottom.
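The working resource definitions are in that post’s TLDR; the general shape of a Helm-free cert-manager setup is an Issuer plus a Certificate, roughly like the sketch below. The API group/version and field layout changed across early cert-manager releases, and the domain/email values are placeholders, so treat this purely as illustration:

```yaml
# Illustrative cert-manager resources (no Helm): an ACME Issuer and a Certificate.
# apiVersion and field layout vary by cert-manager release; adjust for yours.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com-tls
spec:
  secretName: example-com-tls
  issuerRef:
    name: letsencrypt-prod
  dnsNames:
    - example.com
  acme:
    config:
      - http01:
          ingressClass: nginx
        domains:
          - example.com
```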

Better k8s Monitoring Part 2: Adding logging management with the EFKK stack

(FluentD + ElasticSearch + Kibana) x Kubernetes

vados

44 minute read

tl;dr - I started trying to set up EFK (Elastic, FluentD, Kibana), hit frustrating integration issues/bugs with Elastic + Kibana 6.x, tripped myself up a LOT, and almost gave up and switched to Graylog, but then rallied and finished setting everything up by basically fixing my own bad configuration. Skip to the TLDR section for the hand-written working k8s configuration.
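The hand-written configuration lives in that post’s TLDR; the core of the log-shipping piece is a FluentD DaemonSet that tails container logs on every node and forwards them to Elasticsearch, roughly like this sketch (image tag, namespace, service hostname, and host paths are illustrative):

```yaml
# Sketch of a FluentD DaemonSet shipping node container logs to Elasticsearch.
# Image tag and the Elasticsearch service host are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.2-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: elasticsearch.logging.svc.cluster.local
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: containers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: containers
          hostPath:
            path: /var/lib/docker/containers
```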