tl;dr - I switched from ployst/docker-letsencrypt, which I initially considered less complicated than jetstack/kube-lego. Turns out jetstack/kube-lego is pretty simple and *just works*, which is amazing; props to the team over at jetstack and, as always, the kubernetes team, for making this more intelligent automation possible. You could honestly just read the jetstack/kube-lego guide, it's real good. If you wanna see my path through it, keep reading.
Up until now I've been using ployst/docker-letsencrypt, and it's been working fine, but I've longed for a solution that didn't require me to manually kubectl exec
scripts, and kube-lego is that tool. To add a little background (for those who maybe haven't read the previous Kubernetes-related posts on this blog), I'm running a single-node "bare-metal" Kubernetes installation on a dedicated machine I purchased, running CoreOS. Since I have the NGINX Ingress Controller set up (I'm not in a managed-cloud environment so I can't use those ingresses), the kube-lego guide for NGINX was where I started looking (after the README, of course). My setup is just slightly different from what's in the guide (I already have an ingress running and will have to change it, rather than set up a new one).
SIDENOTE: it looks like Letsencrypt is going to start supporting wildcard certificates in early 2018, so that’s pretty awesome!
As always, the first step I take is to read as much of the manual as I can stomach, trying to get the process that’s about to take place clear in my head. After reading through the quick explanation of how kube-lego
works, I took a look at the NGINX example which was pretty well laid out and obvious.
Of course, if you're not familiar with Let's Encrypt, check it out too – and maybe donate! I have. If you're a working developer and benefiting from it (or even if you're not), please find some small (or large) amount to donate to the project; it's enabling a LOT of security across the web.
I made an almost exact copy of the example configuration, in kube-lego.yaml:
---
apiVersion: v1
kind: Namespace
metadata:
  name: kube-lego
---
apiVersion: v1
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  # modify this to specify your address
  lego.email: "email@example.com"
  # configure letsencrypt's production api
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"
kind: ConfigMap
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1
This is basically exactly what's in the example repo at this point in time. If you're reading this, you should probably just go straight to the repository, in case things have changed. I've put all the resources into one file since that's how I normally do it (and I haven't found a super compelling reason not to yet), so I can easily run kubectl apply -f kube-lego.yaml
and be done.
One thing I haven't been doing, and should probably do more, is use the ConfigMap
pattern the way they're using it, to make managing configuration easier. At present, I just change the actual resource file, then re-apply, but theoretically I could go into the Kubernetes Dashboard that's running in the cluster, change things there, and let the relevant pods/deployments restart.
Creating the resources was easy enough:
$ kubectl apply -f kube-lego.yaml
namespace "kube-lego" created
configmap "kube-lego" created
deployment "kube-lego" created
Note that you have to look in the kube-lego
namespace (part of the resource config) to check the pods/deployments:
$ kubectl get deployments -n kube-lego
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-lego 1 1 1 1 16m
$ kubectl get pods -n kube-lego
NAME READY STATUS RESTARTS AGE
kube-lego-2933009699-z77mf 1/1 Running 0 16m
I also checked what the logs looked like just in case:
$ kubectl logs kube-lego-2933009699-z77mf -n kube-lego
time="2017-11-06T02:38:05Z" level=info msg="kube-lego 0.1.5-a9592932 starting" context=kubelego
time="2017-11-06T02:38:05Z" level=info msg="connecting to kubernetes api: https://10.3.0.1:443" context=kubelego
time="2017-11-06T02:38:05Z" level=info msg="successfully connected to kubernetes api v1.7.2+coreos.0" context=kubelego
time="2017-11-06T02:38:05Z" level=info msg="server listening on http://:8080/" context=acme
Looks like everything is well and truly working properly. I find myself saying it more and more, but I want to stop and point out that how easy this robust deployment was is a marvel, enabled by the consistency and robustness of Kubernetes + Docker + LXC + lots of other giants whose shoulders we're currently standing on. It's relatively easy to make software, but it's hard to make robust, simple-to-use software with consistent concepts and good documentation.
Checking the NGINX ingress controller for kube-lego

Now that the kube-lego
deployment seems to be working properly, I want to ensure that there aren't any settings I'm missing on my local NGINX ingress controller.
Looking at the code in the example repository, the deployment doesn’t look particularly special so I think I’m good in that area. I’m already forwarding port 80 and 443 to the ingress controller so I don’t need to worry about the associated service resource either, I think.
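A quick sanity check on that assumption is to confirm the controller's service really does expose ports 80 and 443 (the namespace and label below are guesses; adjust them for wherever your controller actually lives):

```shell
# List the ingress controller's service and confirm 80/443 are exposed
# (namespace and label selector are assumptions -- adjust for your setup)
kubectl get svc -n kube-system -l app=nginx-ingress-controller -o wide
```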
I skipped a lot of steps in the example that just didn’t apply (like creating the echoserver
app).
Converting to the kube-lego way

Now that the kube-lego
deployment seems to be fine and the ingress controller seems to be configured properly (and it's certainly running), it's time to try to convert one currently malfunctioning app over to the kube-lego
way.
To do this, I made the ingress for the service look like the echoserver
example (reproduced below):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver
  namespace: echoserver
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - echo.example.com
    secretName: echoserver-tls
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoserver
          servicePort: 80
There was a bit of an issue: my annotations look different:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: <redacted>-ing
  annotations:
    ingress.kubernetes.io/class: "nginx"
    ingress.kubernetes.io/tls-acme: "true"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/limit-rps: "20"
    # ... more stuff ...
Note the ingress.kubernetes.io/*
vs kubernetes.io/*
configurations… Just to be safe I put both types there, but I need to figure out which I should be using and get rid of the others (the NGINX Ingress Controller seemed to use the ingress.kubernetes.io
annotations, but maybe that's changed since I installed it). For now I'm just going to use both and sort it out later.
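One way to see which annotations a live ingress actually carries (the ingress name here is a placeholder) is to pull them straight out with jsonpath:

```shell
# Dump the annotations on a live ingress to see exactly what's set
# ("my-app-ing" is a placeholder -- substitute your ingress name)
kubectl get ingress my-app-ing -o jsonpath='{.metadata.annotations}'
```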
Time to let ’er rip! The ingress, service and deployment all started without any issues, so I was cautiously optimistic that kube-lego
was already hard at work getting me certs and doing stuff.
I checked the logs and was SO pleased to find:
time="2017-11-06T02:38:05Z" level=info msg="kube-lego 0.1.5-a9592932 starting" context=kubelego
time="2017-11-06T02:38:05Z" level=info msg="connecting to kubernetes api: https://10.3.0.1:443" context=kubelego
time="2017-11-06T02:38:05Z" level=info msg="successfully connected to kubernetes api v1.7.2+coreos.0" context=kubelego
time="2017-11-06T02:38:05Z" level=info msg="server listening on http://:8080/" context=acme
time="2017-11-06T03:13:56Z" level=info msg="ignoring as has no annotation 'kubernetes.io/tls-acme'" context=ingress name=<redacted> namespace=default
time="2017-11-06T03:13:56Z" level=info msg="ignoring as has no annotation 'kubernetes.io/tls-acme'" context=ingress name=<redacted> namespace=default
time="2017-11-06T03:13:56Z" level=info msg="ignoring as has no annotation 'kubernetes.io/tls-acme'" context=ingress name=<redacted> namespace=default
time="2017-11-06T03:13:56Z" level=info msg="ignoring as has no annotation 'kubernetes.io/tls-acme'" context=ingress name=<redacted> namespace=default
time="2017-11-06T03:13:56Z" level=info msg="ignoring as has no annotation 'kubernetes.io/tls-acme'" context=ingress name=<redacted> namespace=default
time="2017-11-06T03:13:56Z" level=info msg="ignoring as has no annotation 'kubernetes.io/tls-acme'" context=ingress name=<redacted> namespace=default
time="2017-11-06T03:13:56Z" level=info msg="process certificate requests for ingresses" context=kubelego
time="2017-11-06T03:13:56Z" level=info msg="Attempting to create new secret" context=secret name=<redacted>-tls namespace=default
time="2017-11-06T03:13:56Z" level=info msg="no cert associated with ingress" context="ingress_tls" name=<redacted>-ing namespace=default
time="2017-11-06T03:13:56Z" level=info msg="requesting certificate for <redacted>.me,<redacted>.com" context="ingress_tls" name=<redacted>-ing namespace=default
time="2017-11-06T03:13:56Z" level=info msg="Attempting to create new secret" context=secret name=kube-lego-account namespace=kube-lego
time="2017-11-06T03:13:56Z" level=info msg="if you don't accept the TOS (https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf) please exit the program now" context=acme
time="2017-11-06T03:13:57Z" level=info msg="created an ACME account (registration url: https://acme-v01.api.letsencrypt.org/acme/reg/<redacted: long numeric key>)" context=acme
time="2017-11-06T03:13:57Z" level=info msg="Attempting to create new secret" context=secret name=kube-lego-account namespace=kube-lego
time="2017-11-06T03:13:57Z" level=info msg="Secret successfully stored" context=secret name=kube-lego-account namespace=kube-lego
time="2017-11-06T03:14:00Z" level=info msg="authorization successful" context=acme domain=<redacted>.me
time="2017-11-06T03:14:00Z" level=info msg="authorization successful" context=acme domain=<redacted>.com
time="2017-11-06T03:14:00Z" level=info msg="authorization successful" context=acme domain=<redacted>.me
time="2017-11-06T03:14:01Z" level=info msg="successfully got certificate: domains=[<redacted>.me <redacted>.me <redacted>.com] url=https://acme-v01.api.letsencrypt.org/acme/cert/<redacted: long alphanumeric key>" context=acme
time="2017-11-06T03:14:01Z" level=info msg="Attempting to create new secret" context=secret name=<redacted>-tls namespace=default
time="2017-11-06T03:14:01Z" level=info msg="Secret successfully stored" context=secret name=<redacted>-tls namespace=default
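To confirm what kube-lego actually stored, you can decode the certificate out of the TLS secret and inspect it with openssl (the secret name below is a placeholder matching the -tls naming above):

```shell
# Decode the certificate from the TLS secret kube-lego created and
# print its subject and validity window
# ("my-app-tls" is a placeholder -- substitute your secret name)
kubectl get secret my-app-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -dates
```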
I absolutely love it when things just work (who doesn't?), and it's little victories like these that make me really happy with my choice to learn and stick with Kubernetes.
This process was super easy: the guide was very helpful, the examples were just about spot-on for what I needed, and the whole thing was easy breezy. Thanks to the team @ jetstack for making all this possible and open-sourcing their amazing cluster add-on. Thanks, of course, to the Kubernetes community, team, and devs for making all this possible and saving me so much time.
I can definitely get behind migrating from ployst/docker-letsencrypt to kube-lego if you can; I can't believe how easy it was.