tl;dr - I tried to get LXD working on Container Linux but stopped short. If anyone picks it up (assuming the
lxd team doesn’t tackle it eventually), maybe they can learn from my failed effort.
tl;dr - Check out Kubernetes features like
PodSecurityPolicy and NetworkPolicy. There are also fantastic, fun, analogy-laden talks from KubeCon 2017 (Austin) and KubeCon 2018 (Copenhagen). CIS benchmarks exist for Kubernetes clusters, and companies like Aqua produce tools like
kube-bench that let you test your clusters against the CIS benchmarks. It’s also important to remember to secure the machines as well as the Kubernetes cluster – so the usual Unix server administration advice applies…
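As a rough sketch, kube-bench can be run directly on a node via its official Docker image; the mounts let it inspect host config, though the exact image tag and target names may differ for your version:

```shell
# Run the CIS node checks with kube-bench's official image
# (host paths mounted read-only so it can inspect kubelet config)
docker run --rm --pid=host \
  -v /etc:/etc:ro \
  -v /var:/var:ro \
  -t aquasec/kube-bench:latest \
  run --targets node
```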
tl;dr - Why don’t we use
<project>-infra repos, git-crypt, and good naming conventions instead of Helm?
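For context, the pattern hinted at here is a per-project infra repo holding plain manifests, with secrets transparently encrypted by git-crypt. A minimal sketch of the `.gitattributes` that makes that work (the `secrets/` path is illustrative, not prescriptive):

```
# .gitattributes in the <project>-infra repo:
# everything under secrets/ is encrypted by git-crypt on commit
secrets/** filter=git-crypt diff=git-crypt
# keep the attributes file itself in plaintext
.gitattributes !filter !diff
```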
tl;dr - I applied a few patterns I’ve used on other projects to a Gitlab CI-powered Rust project to achieve <2 minute builds. Basically just caching at different layers – caching via the Docker image builder pattern at the Docker level, aggressive caching with Gitlab CI at the CI runner level, and one more step of combining some build steps (probably unnecessarily).
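The builder-pattern caching bit can be sketched roughly like this – the image tags and the `myapp` binary name are placeholders, and the dummy-`main.rs` trick exists only to get dependencies compiled into their own cacheable layer:

```dockerfile
# Stage 1: build with the full Rust toolchain
FROM rust:1.70 AS builder
WORKDIR /app
# Copy only the manifests and build a dummy main, so dependency
# compilation is cached as its own layer
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo 'fn main() {}' > src/main.rs && cargo build --release
# Now copy real sources; only this layer rebuilds on source changes
COPY src ./src
RUN touch src/main.rs && cargo build --release

# Stage 2: slim runtime image with just the binary
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```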
I recently wrote a post about switching back to Container Linux for my small Kubernetes cluster, in which I outlined everything I needed to do to get it up and running. Even more recently, I decided I wanted to go ahead and run “rktnetes” to try and take advantage of its annotation-powered stage1 selection, and figured I should post that up too for any fellow
tl;dr - After a bad kernel upgrade (
pacman -Syu) on my Arch-powered server, I decided to go back to Container Linux, after being equal parts annoyed by Arch and encouraged by the press release put out by Red Hat. This time, I spent much more time with the Ignition config files in conjunction with kubeadm and ended up with a bootable master node. Feel free to check out the tldr at the end.
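To give a flavor of what those Ignition configs look like, here is a trimmed-down shape of one that just enables a systemd unit – the spec version and unit name depend entirely on your setup, so treat this as a sketch:

```json
{
  "ignition": { "version": "2.2.0" },
  "systemd": {
    "units": [
      {
        "name": "kubelet.service",
        "enabled": true
      }
    ]
  }
}
```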
tl;dr - On a Haskell project I’m working on, I started with 20+ minute cold-cache builds in the worst case in my Gitlab-powered CI environment, then found some small ways to improve. Very recently I decided I wasn’t satisfied with ~10-15 minute builds and did the laziest, least-effort steps I could find to get to <10 minute cold-cache builds (~5 minute best case). Check out the TLDR section to see the Dockerfiles and steps I took, summarized.
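The general shape of the layer-caching trick for a stack-based Haskell build looks something like the following – image tag and file names are illustrative, not the actual Dockerfiles from the post:

```dockerfile
FROM haskell:8 AS builder
WORKDIR /app
# Copy only the build plan first, so dependency compilation
# lands in its own cached layer
COPY stack.yaml package.yaml ./
RUN stack build --only-dependencies
# Now copy the actual source; only this layer rebuilds on change
COPY . .
RUN stack build
```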
tl;dr - An update (
sudo pacman -Syu) to an Arch server I manage messed up its boot process, due to an interaction between RAID and GRUB, and I stumbled my way through debugging it.
tl;dr - I spent a bunch of time stumbling through getting
kim/opentracing integrated into my small Servant-powered web app. In the end I actually switched to
servant-tracing due to some issues integrating, and was able to get it working – there’s a TON of wandering in this post (basically half the time you’re reading an approximation of my stream of consciousness; some might consider the experiments with
kim/opentracing a waste of time, but I do not), so please check out the…