tl;dr - I set up a mailing list (for this blog) with Mailtrain on my tiny k8s cluster. Along the way I created a small Rust binary for converting POST-ed forms to mailtrain API calls and a Mithril component for mailing list signup call-to-actions.
A reader named Damien pointed out that I didn't leave a decent example of how I was using kustomize, so I wanted to add to the mrman/makeinfra-pattern repository to point out how. I updated the mrman/makeinfra-pattern repo on GitLab so that it now contains an example of how you could go about adding this to your setup. If you're interested specifically in the diff (what it took to get to kustomize integration), you can check that out as well.
I've been running this small blog for a while, and one of the things I've never considered is that people might actually want to receive updates when I make new "content". Up until now I've put up notifications on open forums related to the specific communities (mostly reddit – r/haskell and r/kubernetes, for example), or maybe thrown a submission at HackerNews, but as patio11 (if you're not familiar with Patrick, read his post on salary negotiation for an intro) would (probably) say, email is a very underrated communication mechanism.
Since this post is going to assume a lot of Kubernetes knowledge, those unfamiliar might want to stop reading and take a look at the kubernetes documentation. This post is not a guide to kubernetes and already assumes you know the concepts.
mailtrain depends on quite a few pieces of software to run – at minimum MariaDB/MySQL, Redis, and MongoDB. Obviously, you're going to want to be somewhat familiar with those technologies before getting started – at least knowing what they are, why they exist, and how to do a minimal amount of administration and security work on them.
A note on why Postgres would have been better – Postgres could have done a decent job of providing the features of MariaDB/MySQL, MongoDB and Redis without breaking a sweat, tremendously reducing the dependencies, along with bringing lots of features that mailtrain could have used. Some future me with more time would have dug around in the mailtrain codebase to see if they'd made interfaces for these components, and whether I could add a Postgres implementation for each. I didn't actually go spelunking through the code, but there is an issue for Postgres support (issue #2!!) that's basically dead, so maybe it's better I didn't take a look.
kustomize for templating with the Makeinfra pattern
As I've mentioned on this blog in the past, I manage and run my infrastructure with Makefiles – an approach I've nicknamed the "Makeinfra" pattern. For this entry in my -infra repository, though, I'm going to be replacing the venerable envsubst with kustomize (which was recently merged into kubectl) to make it even easier.
While Makefiles are not without downsides, make is very widely supported and can paper over multiple languages for a consistent interface to the software build process. Since I'm using make, it's a bit easier to replace envsubst with kustomize without too much shock, as the commands I run (ex. make deployment) don't change. I did run into one teething issue with using kustomize – Kubernetes 1.16's kubectl currently ships with a version of kustomize that is too old to use the -o option as an output path, so I had to download kustomize manually. All in all I was pretty happy with how the kustomize-powered version of the makeinfra pattern turned out – the Makefile is massively simplified.
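In case you hit the same problem, here's a rough sketch of the manual download workaround – the release tag and version in the URL below are placeholders, so grab whichever current kustomize release fits your platform:

$ # download a standalone kustomize release (tag/version are placeholders)
$ curl -sSL -o kustomize.tar.gz https://github.com/kubernetes-sigs/kustomize/releases/download/<release-tag>/kustomize_<version>_linux_amd64.tar.gz
$ tar -xzf kustomize.tar.gz && chmod +x kustomize && mv kustomize ~/.local/bin/
$ kustomize version
$ # render all manifests to a directory instead of stdout
$ kustomize build -o generated/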
Here's part of the updated Makefile to give a sense of how it will be used:
GENERATED_DIR ?= generated

generated-folder:
	mkdir -p $(GENERATED_DIR)

generate: generated-folder
	$(KUSTOMIZE) build -o $(GENERATED_DIR)/

# ... more targets ...

redis-pvc:
	$(KUBECTL) apply -f $(GENERATED_DIR)/*_persistentvolumeclaim_$(PREFIX)redis*.yaml

redis-deployment:
	$(KUBECTL) apply -f $(GENERATED_DIR)/*_deployment_$(PREFIX)redis*.yaml

redis-svc:
	$(KUBECTL) apply -f $(GENERATED_DIR)/*_service_$(PREFIX)redis*.yaml

redis-uninstall:
	$(KUBECTL) delete -f $(GENERATED_DIR)/*_deployment_$(PREFIX)redis*.yaml
	$(KUBECTL) delete -f $(GENERATED_DIR)/*_service_$(PREFIX)redis*.yaml

redis-destroy-data:
	$(KUBECTL) delete -f $(GENERATED_DIR)/*_persistentvolumeclaim_$(PREFIX)redis*.yaml
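To give a feel for the workflow (the other services get targets following the same pattern as the redis ones above), a typical run on my end looks roughly like:

$ make generate
$ make redis-pvc redis-deployment redis-svc
$ # ...and when I want redis gone (but want to keep its data):
$ make redis-uninstall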
And here's what the kustomization.yaml looks like:
namePrefix: mailtrain-
namespace: vadosware-blog

secretGenerator:
  - name: mariadb-secrets
    files:
      - secrets/mariadb.MYSQL_USER.secret
      - secrets/mariadb.MYSQL_PASSWORD.secret
      - secrets/mariadb.MYSQL_ROOT_PASSWORD.secret
  - name: mailtrain-secrets
    files:
      - secrets/mailtrain.WWW_SECRET.secret

resources:
  # redis
  - redis.svc.yaml
  - redis.deployment.yaml
  - redis.pvc.yaml
  # mongodb
  - mongodb.svc.yaml
  - mongodb.deployment.yaml
  - mongodb.pvc.yaml
  # mariadb
  - mariadb.svc.yaml
  - mariadb.deployment.yaml
  - mariadb.pvc.yaml
  # mailtrain
  - mailtrain.svc.yaml
  - mailtrain.deployment.yaml
  - mailtrain.pvc.yaml
  - mailtrain.ingress.yaml

commonLabels:
  app: vadosware-mailtrain
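The secretGenerator entries above expect the referenced files to exist under secrets/ before kustomize build runs. As a sketch (the values here are obviously placeholders), creating them might look like:

$ mkdir -p secrets
$ # printf (no trailing newline) so the secret values don't pick up a stray newline
$ printf 'mailtrain' > secrets/mariadb.MYSQL_USER.secret
$ printf 'some-long-random-password' > secrets/mariadb.MYSQL_PASSWORD.secret
$ printf 'another-long-random-password' > secrets/mariadb.MYSQL_ROOT_PASSWORD.secret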
Note that the YAML in this post will be the generated YAML, not the template, to make it easier for those who aren't familiar with kustomize.
OK, so at this point I'm very familiar with Kubernetes, so I'm going to jump into just building the Deployments, Services and Ingresses that I need to make mailtrain available. A successful Mailtrain installation exposes 3 endpoints (port 3000 for trusted access, 3003 for sandbox, 3004 for public), so I'll probably need to have an Ingress listening on 3004 pointing to a Service that exposes an Endpoint pointing to the Deployment of Mailtrain. Of course, Mailtrain itself will need to be connected to the relevant backing services (mysql, redis, etc).
While it's not strictly necessary, I will be making separate files for mailtrain itself and the related services. Being able to configure/modify the dependencies separately from mailtrain itself is more work (and costs a slight bit of performance from DNS, etc.), but it's much cleaner and worth the effort up front in my mind.
Before we start we'll need a namespace to kick everything off, so make sure the namespace you're going to use has been created:
mailtrain.ns.yaml:

---
kind: Namespace
apiVersion: v1
metadata:
  name: mailtrain
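If you haven't created it yet, that's just an apply away – something like:

$ kubectl apply -f mailtrain.ns.yaml
$ kubectl get ns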
We'll start with the easiest service to deploy – a redis instance for mailtrain to use. First we'll need to give it a place to store data: a PersistentVolumeClaim. My PVCs are powered by OpenEBS (I've written about my setup before), so that's what's powering the PersistentVolumeClaim below:
generated/~g_v1_persistentvolumeclaim_mailtrain-redis-data.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: vadosware-mailtrain
    tier: data
  name: mailtrain-redis-data
  namespace: vadosware-blog
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: openebs-jiva-non-ha
And now for the Deployment that will use the PersistentVolumeClaim mentioned above:
generated/apps_v1_deployment_vadosware-mailtrain-redis.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-redis
  namespace: vadosware-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vadosware-mailtrain
      component: redis
  template:
    metadata:
      labels:
        app: vadosware-mailtrain
        component: redis
    spec:
      containers:
      - image: redis:5.0.7-alpine
        name: redis
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
        resources:
          limits:
            cpu: 1
        volumeMounts:
        - mountPath: /var/lib/redis
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mailtrain-redis-data
And here's the Service to expose that deployment to other local Pods:
generated/~g_v1_service_mailtrain-redis.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-redis
  namespace: vadosware-blog
spec:
  ports:
  - name: redis
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: vadosware-mailtrain
    component: redis
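If you want to sanity-check that redis is reachable inside the cluster before moving on, a throwaway pod works well – this is just a quick check I like to do, not something from the repo:

$ kubectl run redis-check -n vadosware-blog --rm -it --restart=Never \
    --image=redis:5.0.7-alpine -- redis-cli -h mailtrain-redis ping
$ # expect: PONG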
Now on to some slightly more complicated software: MongoDB. First we need a PVC to work with:
generated/~g_v1_persistentvolumeclaim_mailtrain-mongodb.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mongodb
  namespace: vadosware-blog
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: openebs-jiva-non-ha
Then a Deployment for mongo:
generated/apps_v1_deployment_mailtrain-mongodb.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mongodb
  namespace: vadosware-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vadosware-mailtrain
      tier: mongodb
  template:
    metadata:
      labels:
        app: vadosware-mailtrain
        tier: mongodb
    spec:
      containers:
      - image: mongo:4.2.3-bionic
        imagePullPolicy: IfNotPresent
        name: mongo
        ports:
        - containerPort: 27017
          protocol: TCP
        volumeMounts:
        - mountPath: /data/db
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mailtrain-mongodb
And here's the Service to expose it to other pods in the namespace:
generated/~g_v1_service_mailtrain-mongodb.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mongodb
  namespace: vadosware-blog
spec:
  ports:
  - name: mongodb
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    app: vadosware-mailtrain
    tier: mongodb
Moving on to even more complicated software, let’s add the MariaDB instance. We’ll start with a PVC:
generated/~g_v1_persistentvolumeclaim_mailtrain-mariadb.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mariadb
  namespace: vadosware-blog
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: openebs-jiva-non-ha
generated/apps_v1_deployment_mailtrain-mariadb.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mariadb
  namespace: vadosware-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vadosware-mailtrain
  template:
    metadata:
      labels:
        app: vadosware-mailtrain
        tier: mariadb
    spec:
      containers:
      - args:
        - --character-set-server=utf8mb4
        - --collation-server=utf8mb4_unicode_ci
        env:
        - name: MYSQL_DATABASE
          value: mailtrain
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              key: mariadb.MYSQL_USER.secret
              name: mailtrain-mariadb-secrets-<kustomize-generated randomized id>
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mariadb.MYSQL_PASSWORD.secret
              name: mailtrain-mariadb-secrets-<kustomize-generated randomized id>
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mariadb.MYSQL_ROOT_PASSWORD.secret
              name: mailtrain-mariadb-secrets-<kustomize-generated randomized id>
        image: mariadb:10.4.12-bionic
        imagePullPolicy: IfNotPresent
        name: mariadb
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mailtrain-mariadb
And here's the Service to expose it to other pods in the namespace:
generated/~g_v1_service_mailtrain-mariadb.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mariadb
  namespace: vadosware-blog
spec:
  ports:
  - name: mariadb
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: vadosware-mailtrain
    tier: mariadb
One thing that's different about mariadb in comparison to the other infra pieces is that we do need to do some configuration. The Secrets that we'll need have been generated by kustomize, so we just need to pretend they exist (kustomize will modify the names as necessary deep in the Deployment). Here's what the generated secrets file looks like:
generated/~g_v1_secret_mailtrain-mariadb-<kustomize-generated randomized id>.yaml:

apiVersion: v1
data:
  mariadb.MYSQL_PASSWORD.secret: <base 64 encoded secret>
  mariadb.MYSQL_ROOT_PASSWORD.secret: <base 64 encoded secret>
  mariadb.MYSQL_USER.secret: <base 64 encoded secret>
kind: Secret
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mariadb-secrets-<kustomize-generated randomized id>
  namespace: vadosware-blog
type: Opaque
Note that I've used basically one-line files to encode the secrets, since we only get to inject the key. Also, you're definitely going to want to encrypt your secrets if you're going to store them in source control, so check out git-crypt, which is what I use.
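For completeness, here's roughly what the git-crypt setup looks like – a sketch assuming GPG keys, so see the git-crypt README for the full details:

$ # inside the -infra repo
$ git-crypt init
$ git-crypt add-gpg-user <your GPG key ID>
$ printf 'secrets/** filter=git-crypt diff=git-crypt\n' >> .gitattributes
$ # from here on, anything committed under secrets/ is transparently encrypted in git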
Now it's time to put mailtrain itself into service! First we'll need a PersistentVolumeClaim on which to store the data we'll be using:
generated/~g_v1_persistentvolumeclaim_mailtrain-mailtrain.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mailtrain
  namespace: vadosware-blog
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: openebs-jiva-non-ha
And we'll need an actual Deployment which uses that PersistentVolumeClaim and starts mailtrain itself:
generated/apps_v1_deployment_mailtrain-mailtrain.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mailtrain
  namespace: vadosware-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vadosware-mailtrain
      tier: mailtrain
  template:
    metadata:
      labels:
        app: vadosware-mailtrain
        tier: mailtrain
    spec:
      containers:
      - env:
        - name: URL_BASE_PUBLIC
          value: https://mailtrain.vadosware.io
        - name: WWW_PROXY
          value: "true"
        - name: WWW_SECRET
          valueFrom:
            secretKeyRef:
              key: mailtrain.WWW_SECRET.secret
              name: mailtrain-mailtrain-secrets-<kustomize-generated randomized id>
        - name: MONGO_HOST
          value: mailtrain-mongodb
        - name: REDIS_HOST
          value: mailtrain-redis
        - name: MYSQL_HOST
          value: mailtrain-mariadb
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mariadb.MYSQL_PASSWORD.secret
              name: mailtrain-mariadb-secrets-<kustomize-generated randomized id>
        # this image is latest mailtrain v2 beta as of 02/09/2020
        image: mailtrain/mailtrain@sha256:77abbb4a5c94423283d989d1a906a38aef335a9ad2f02da81ba2f52fc677cfee
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 300
        name: mailtrain
        ports:
        - containerPort: 3000
          protocol: TCP
        - containerPort: 3003
          protocol: TCP
        - containerPort: 3004
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 10
        volumeMounts:
        - mountPath: /data/db
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mailtrain-mailtrain
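Once everything is applied, checking that mailtrain actually comes up is mostly a matter of watching the rollout and tailing the logs – roughly:

$ kubectl rollout status deployment/mailtrain-mailtrain -n vadosware-blog
$ kubectl logs -f deployment/mailtrain-mailtrain -n vadosware-blog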
And here's the Service to expose mailtrain to other pods in the namespace:
generated/~g_v1_service_mailtrain-mailtrain.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mailtrain
  namespace: vadosware-blog
spec:
  ports:
  - name: mailtrain
    port: 3004
    protocol: TCP
    targetPort: 3004
  selector:
    app: vadosware-mailtrain
    tier: mailtrain
Much like MariaDB, we'll also need a Secret so that we can set the WWW_SECRET that is used. While not strictly necessary, I like setting this so that in the case that I completely blew away the instance, logins would still work (they'd be encrypted with the same key, so the server wouldn't notice). Here's what the generated secret looks like:
generated/~g_v1_secret_mailtrain-mailtrain-secrets-<>.yaml:

apiVersion: v1
data:
  mailtrain.WWW_SECRET.secret: <base64 encoded secret>
kind: Secret
metadata:
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mailtrain-secrets-<kustomize-generated randomized id>
  namespace: vadosware-blog
type: Opaque
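Since WWW_SECRET is just an opaque session secret, generating the one-line file for it is simple – something like this (the length is arbitrary):

$ openssl rand -hex 32 > secrets/mailtrain.WWW_SECRET.secret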
SKIP TO THE NEXT SECTION if you don’t care about me debugging my broken OpenEBS cluster
A little while back I let my certs expire and had a bad time trying to get everything back up. One of the holdovers from that was that OpenEBS had actually not been provisioning new volumes properly this entire time (oh no). While there haven't been any new volumes to provision, the Pods had just been erroring (and restarting, etc.) due to their bad service accounts. The fix is relatively easy – deleting the old ServiceAccounts and recreating the DaemonSets and Deployments – but that led me into…
SKIP TO THE NEXT SECTION if you don’t care about me debugging my broken OpenEBS cluster
Once the old ServiceAccounts were deleted and the relevant DaemonSets and Deployments were restarted, I found another issue – my PersistentVolumeClaims were stuck in the Pending state:
$ k get pvc -n vadosware-blog
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mailtrain-mailtrain Pending openebs-jiva-non-ha 2m1s
mailtrain-mariadb Pending openebs-jiva-non-ha 2m21s
mailtrain-mongodb Pending openebs-jiva-non-ha 2m14s
mailtrain-redis-data Pending openebs-jiva-non-ha 2m8s
Seeing this, the first instinct is to check k describe pvc <some-pvc> -n vadosware-blog, and here's what I saw:
$ k describe pvc mailtrain-redis-data -n vadosware-blog
Name: mailtrain-redis-data
Namespace: vadosware-blog
StorageClass: openebs-jiva-non-ha
Status: Pending
Volume:
Labels: app=vadosware-mailtrain
tier=data
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"app":"vadosware-mailtrain","tier":"data"},"name"...
volume.beta.kubernetes.io/storage-provisioner: openebs.io/provisioner-iscsi
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning <invalid> (x4 over 75s) openebs.io/provisioner-iscsi_openebs-provisioner-769d9998b6-tlqdz_d8e22cdb-4ad5-11ea-bb7a-7ee40f89b88b External provisioner is provisioning volume for claim "vadosware-blog/mailtrain-redis-data"
Warning ProvisioningFailed <invalid> (x4 over 45s) openebs.io/provisioner-iscsi_openebs-provisioner-769d9998b6-tlqdz_d8e22cdb-4ad5-11ea-bb7a-7ee40f89b88b failed to provision volume with StorageClass "openebs-jiva-non-ha": Get http://10.97.96.224:5656/latest/volumes/pvc-3a5c1bde-441f-4f4a-9b1f-843aa030fe72: dial tcp 10.97.96.224:5656: i/o timeout
Normal ExternalProvisioning <invalid> (x19 over 75s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "openebs.io/provisioner-iscsi" or manually created by system administrator
Well that's a problem – we're having some sort of "I/O timeout"… Is this trying to create too many PVCs at the same time? Looking at the logs for the provisioner I saw:
I0209 00:40:26.847873 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vadosware-blog", Name:"mailtrain-redis-data", UID:"5c9dbcc6-89f9-47ac-afcc-fcb1b5d7f80b", APIVersion:"v1", ResourceVersion:"97175284", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "openebs-jiva-non-ha": Get http://10.110.168.61:5656/latest/volumes/pvc-5c9dbcc6-89f9-47ac-afcc-fcb1b5d7f80b: dial tcp 10.110.168.61:5656: i/o timeout
Note that the port it's trying to get to is 5656 – that's the OpenEBS API server, which it apparently can't reach. Some more digging:
$ k get svc -n openebs -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
maya-apiserver-service ClusterIP 10.97.96.224 <none> 5656/TCP 7m21s name=maya-apiserver
$ k get pods -n openebs -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
maya-apiserver-687bb87957-gqm4w 1/1 Running 0 7m37s 10.244.0.239 ubuntu-1810-cosmic-64-minimal <none> <none>
openebs-ndm-r9r7t 1/1 Running 0 6m53s 10.244.0.241 ubuntu-1810-cosmic-64-minimal <none> <none>
openebs-provisioner-769d9998b6-jzbl6 1/1 Running 0 11m 10.244.0.237 ubuntu-1810-cosmic-64-minimal <none> <none>
openebs-snapshot-operator-b5f848d64-px7bf 2/2 Running 0 6m55s 10.244.0.240 ubuntu-1810-cosmic-64-minimal <none> <none>
So based on this, we'd expect maya-apiserver-service to have an Endpoint that points to 10.244.0.239, so let's confirm that:
$ k describe svc maya-apiserver-service -n openebs
Name: maya-apiserver-service
Namespace: openebs
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"maya-apiserver-service","namespace":"openebs"},"spec":{"ports":[{...
Selector: name=maya-apiserver
Type: ClusterIP
IP: 10.97.96.224
Port: api 5656/TCP
TargetPort: 5656/TCP
Endpoints: 10.244.0.239:5656
Session Affinity: None
Events: <none>
So far so good, but for some reason the error we're seeing in the provisioner's logs indicates the provisioner is trying to connect to 10.110.168.61:5656! Somehow the provisioner has the completely wrong idea of where the API server is located, so let's just kill it… Now we've got a different issue – if we look in the logs again:
I0209 00:50:07.729373 1 controller.go:991] provision "vadosware-blog/mailtrain-redis-data" class "openebs-jiva-non-ha": started
I0209 00:50:07.731675 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vadosware-blog", Name:"mailtrain-redis-data", UID:"64b5db54-e012-4d49-b80d-57d529921731", APIVersion:"v1", ResourceVersion:"97177067", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "vadosware-blog/mailtrain-redis-data"
E0209 00:50:07.792981 1 volume.go:170] HTTP Status error from maya-apiserver: Not Found
I0209 00:50:07.793147 1 volume.go:105] CAS Volume Spec Created:
<removed for space>
E0209 00:50:07.980885 1 volume.go:128] Internal Server Error: failed to create volume 'pvc-64b5db54-e012-4d49-b80d-57d529921731': response: the server could not find the requested resource
E0209 00:50:07.980911 1 cas_provision.go:118] Failed to create volume: <removed for space>
W0209 00:50:07.981073 1 controller.go:750] Retrying syncing claim "vadosware-blog/mailtrain-redis-data" because failures 2 < threshold 15
E0209 00:50:07.981112 1 controller.go:765] error syncing claim "vadosware-blog/mailtrain-redis-data": failed to provision volume with StorageClass "openebs-jiva-non-ha": Internal Server Error: failed to create volume 'pvc-64b5db54-e012-4d49-b80d-57d529921731': response: the server could not find the requested resource
I0209 00:50:07.981156 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vadosware-blog", Name:"mailtrain-redis-data", UID:"64b5db54-e012-4d49-b80d-57d529921731", APIVersion:"v1", ResourceVersion:"97177067", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "openebs-jiva-non-ha": Internal Server Error: failed to create volume 'pvc-64b5db54-e012-4d49-b80d-57d529921731': response: the server could not find the requested resource
OK, so the first line that looks important is HTTP Status error from maya-apiserver: Not Found – while I'm not sure what was not found, maybe the provisioner itself can't find the API server? Services are exposed by name, and I know that the openebs ("maya") API server's service name is maya-apiserver-service… Lo and behold, taking a look back at the provisioner's resource config, there's a comment:
- name: OPENEBS_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# that provisioner should forward the volume create/delete requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs provisioner version 0.5.3-RC1 onwards
#- name: OPENEBS_MAYA_SERVICE_NAME
#  value: "maya-apiserver-apiservice"
So everything looks to be in order here – why is it expecting the API server to be at such a weird IP address? Another issue: I don't know which resource is missing! Let's do some more digging. Looking at the OpenEBS API server's logs there's a lot of output, but here are some error lines:
$ k logs -f maya-apiserver-7fbdc84ddd-l52vc -n openebs | egrep ^E
E0209 01:40:10.652226 6 runner.go:213] failed to execute runtask: name 'jiva-volume-read-listtargetservice-default-0.8.0': meta yaml 'id: readlistsvc
E0209 01:40:10.652277 6 volume_endpoint_v1alpha1.go:175] failed to read cas template based volume: error 'controller service not found'
E0209 01:40:10.834507 6 runner.go:213] failed to execute runtask: name 'jiva-volume-create-putreplicadeployment-default-0.8.0': meta yaml 'id: createputrep
E0209 01:40:10.834657 6 meta.go:495] failed to build rollback instance for task 'createputrep': object name is missing: meta task '{MetaTaskIdentity:{Identity:createputrep Kind:Deployment APIVersion:extensions/v1beta1} MetaTaskProps:{RunNamespace:vadosware-blog Owner: ObjectName: Options: Retry: Disable:false} Action:put RepeatWith:{Resources:[] Metas:[]}}'
E0209 01:40:10.834666 6 runner.go:219] failed to plan for rollback: 'failed to build rollback instance for task 'createputrep': object name is missing'
E0209 01:40:10.864731 6 volume_endpoint_v1alpha1.go:128] failed to create cas template based volume: error 'the server could not find the requested resource'
E0209 01:40:25.986237 6 runner.go:213] failed to execute runtask: name 'jiva-volume-read-listtargetservice-default-0.8.0': meta yaml 'id: readlistsvc
E0209 01:40:25.986306 6 volume_endpoint_v1alpha1.go:175] failed to read cas template based volume: error 'controller service not found'
E0209 01:40:26.118368 6 runner.go:213] failed to execute runtask: name 'jiva-volume-create-putreplicadeployment-default-0.8.0': meta yaml 'id: createputrep
E0209 01:40:26.118535 6 meta.go:495] failed to build rollback instance for task 'createputrep': object name is missing: meta task '{MetaTaskIdentity:{Identity:createputrep Kind:Deployment APIVersion:extensions/v1beta1} MetaTaskProps:{RunNamespace:vadosware-blog Owner: ObjectName: Options: Retry: Disable:false} Action:put RepeatWith:{Resources:[] Metas:[]}}'
E0209 01:40:26.118547 6 runner.go:219] failed to plan for rollback: 'failed to build rollback instance for task 'createputrep': object name is missing'
E0209 01:40:26.164701 6 volume_endpoint_v1alpha1.go:128] failed to create cas template based volume: error 'the server could not find the requested resource'
E0209 01:43:58.342318 6 volume_endpoint_v1alpha1.go:219] failed to delete cas template based volume: error 'storageclasses.storage.k8s.io "openebs-jiva-1r" not found'
E0209 01:43:58.342323 6 volume_endpoint_v1alpha1.go:219] failed to delete cas template based volume: error 'storageclasses.storage.k8s.io "openebs-jiva-1r" not found'
E0209 01:43:58.468997 6 volume_endpoint_v1alpha1.go:219] failed to delete cas template based volume: error 'storageclasses.storage.k8s.io "openebs-jiva-1r" not found'
If we restart the Node Disk Manager, the missing storage class errors go away (it’s my fault for deleting them in the first place, I wanted to reduce the noise since the non-HA one is the only one I use):
E0209 01:51:36.096494 6 runner.go:213] failed to execute runtask: name 'jiva-volume-read-listtargetservice-default-0.8.0': meta yaml 'id: readlistsvc
E0209 01:51:36.096565 6 volume_endpoint_v1alpha1.go:175] failed to read cas template based volume: error 'controller service not found'
E0209 01:51:36.212545 6 runner.go:213] failed to execute runtask: name 'jiva-volume-create-putreplicadeployment-default-0.8.0': meta yaml 'id: createputrep
E0209 01:51:36.212706 6 meta.go:495] failed to build rollback instance for task 'createputrep': object name is missing: meta task '{MetaTaskIdentity:{Identity:createputrep Kind:Deployment APIVersion:extensions/v1beta1} MetaTaskProps:{RunNamespace:vadosware-blog Owner: ObjectName: Options: Retry: Disable:false} Action:put RepeatWith:{Resources:[] Metas:[]}}'
E0209 01:51:36.212717 6 runner.go:219] failed to plan for rollback: 'failed to build rollback instance for task 'createputrep': object name is missing'
E0209 01:51:36.258837 6 volume_endpoint_v1alpha1.go:128] failed to create cas template based volume: error 'the server could not find the requested resource'
Since I don't know exactly what's wrong, it looks like I'll need to consider all of these. It looks like a bunch of stuff I deleted (storageclasses I didn't use, etc.) is missing, so what I'll do is re-create the DaemonSet since that is what makes them. This did cut down on the errors, but I think I've found the real issue – this is something that has actually bitten me locally a BUNCH; here's the important line:
E0209 01:51:36.212706 6 meta.go:495] failed to build rollback instance for task 'createputrep': object name is missing: meta task '{MetaTaskIdentity:{Identity:createputrep Kind:Deployment APIVersion:extensions/v1beta1} MetaTaskProps:{RunNamespace:vadosware-blog Owner: ObjectName: Options: Retry: Disable:false} Action:put RepeatWith:{Resources:[] Metas:[]}}'
So there are a few issues here:

- The Kind is Deployment and the apiVersion is extensions/v1beta1 – this was removed in Kubernetes 1.16!
- ObjectName is obviously empty – it may actually be required in newer versions of kubernetes

I'm thinking that I'll have to upgrade OpenEBS now – clearly the version I'm using is far too old (I've upgraded the cluster recently and didn't upgrade OpenEBS to match). While there is documentation for upgrading, I'm on such an old version (0.8.0, which was supposedly not released, based on the logs of the API server) that it would be insane to try going from 0.8.0 to 1.6.0 (the current version) all in one jump. Instead I looked into the Kubernetes changelog for 1.16 to see if there was a way to allow the use of extensions/v1beta1, and there is!
Serving these resources can be temporarily re-enabled using the --runtime-config apiserver flag.

apps/v1beta1=true
apps/v1beta2=true
extensions/v1beta1/daemonsets=true,extensions/v1beta1/deployments=true,extensions/v1beta1/replicasets=true,extensions/v1beta1/networkpolicies=true,extensions/v1beta1/podsecuritypolicies=true
Since I'm on 1.16 this looks like the least risky path – and since I only need it for Deployments, extensions/v1beta1/deployments=true is the value I need to pass in. To change this I needed to get to the manifest for the API server (/etc/kubernetes/manifests/kube-apiserver.yaml) and edit it manually:
# ... other yaml ... #
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --runtime-config='extensions/v1beta1/deployments=true'
# ... other yaml ... #
Then I restarted the API service… only to have it crash immediately, since I didn't need the single quotes. Once I found an example of what this value should be set to, I fixed it (removing the quotes) and eventually I could see the kube-apiserver pod running with crictl (I use and am very happy with containerd):
root@ubuntu-1810-cosmic-64-minimal ~ # crictl --config=/root/.crictl.yaml ps | grep kube-api
128d874d00184 3722a80984a04 23 seconds ago Running kube-apiserver 0 c491f65d2f5a8
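A quick way to double-check that the deprecated API group is actually being served again (before waiting on the provisioner) is to ask the API server directly:

$ kubectl api-versions | grep extensions
$ # should now include extensions/v1beta1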
…AND IT WORKED, PVCs all got bound properly:
$ k get pvc -n vadosware-blog
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mailtrain-mailtrain Bound pvc-a28ec786-0d8a-41cc-a662-0bdb4f5194ea 8Gi RWO openebs-jiva-non-ha 4m47s
mailtrain-mariadb Bound pvc-7bf719a8-8a82-471f-a9c9-429c0aae5d3e 8Gi RWO openebs-jiva-non-ha 5m7s
mailtrain-mongodb Bound pvc-33c5baee-aefd-40b9-bf6a-c165c222ab88 8Gi RWO openebs-jiva-non-ha 4m59s
mailtrain-redis-data Bound pvc-581856fd-b137-4139-a753-6b6c1cfa1f7c 1Gi RWO openebs-jiva-non-ha 7m37s
Now we can get back to our regularly scheduled programming – installing mailtrain!
Let's test mailtrain out locally with kubectl port-forward before we deploy it to the internet:
$ k port-forward mailtrain-mailtrain-6dcb65c554-vqrhb 3000 -n vadosware-blog
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
And through the magic of going back and editing this post until everything was right, it worked the first time!
After logging in with username “admin” and password “test” (this is what’s configured for the docker container) and updating the credentials, I was good to go.
Some bits of the mailtrain site (like the content editing page of a campaign) actually use subdocuments (frames), so you'll need to do more port forwarding to keep them from breaking!
$ k port-forward mailtrain-mailtrain-6dcb65c554-vqrhb 3003 -n vadosware-blog
Forwarding from 127.0.0.1:3003 -> 3003
Forwarding from [::1]:3003 -> 3003
Now that everything seems to be working, let's add the Ingress that will allow traffic into our mailtrain instance. I'm using cert-manager, one of the most useful operators ever created, so all I have to do is make an ingress and an HTTPS cert will be made for me. As discussed earlier we'll only be exposing the public endpoint:
generated/~g_v1_ingress_mailtrain-mailtrain.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/limit-rps: "20"
    ingress.kubernetes.io/proxy-body-size: 25m
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/ingress.class: traefik
    kubernetes.io/tls-acme: "true"
  labels:
    app: vadosware-mailtrain
  name: mailtrain-mailtrain
  namespace: vadosware-blog
spec:
  rules:
  - host: mailtrain.vadosware.io
    http:
      paths:
      - backend:
          serviceName: mailtrain-mailtrain
          servicePort: 3004
  tls:
  - hosts:
    - mailtrain.vadosware.io
    secretName: vadosware-mailtrain-tls
Awesome, now we have the part of mailtrain that should be accessible to the outside world… We actually won’t even have to provide this Ingress, as a result of the following bonus section…
One of the things that anyone who runs a mailing list is going to want to do is drop a small call-to-action on their page letting people sign up for the mailing list. Though mailtrain does have a "subscription form" that's generated and available at <mailtrain public address>/subscription/<mailing list id>?locale=en-US – which you could put in an appropriately sized <iframe> and call it a day – usually you want to make your own form close to the content.
Right now mailtrain doesn't actually expose a /subscribe endpoint that you can POST an HTML form to. To get that functionality we could write a small service that accepts form POSTs on a /subscribe endpoint, adds on the access token (and list ID), and forwards the subscription to the mailtrain API.
Pretty much every time I start a small project these days I consider whether to use Rust or Haskell. While the functionality I want to add is easily a <100 line script, I'm going to go with Rust for this project – it's been a while since I wrote some Rust, Rust is awesome, and Haskell feels too heavy-weight for this.
In mailtrain v2 beta there’s actually a page describing how to use the API and giving you a way to generate an access token:
POST /api/subscribe/:listId – Add subscription
This API call either inserts a new subscription or updates existing. Fields not included are left as is, so if you update only LAST_NAME value, then FIRST_NAME is kept untouched for an existing subscription.
Query params

- access_token – your personal access token

POST arguments

- EMAIL – subscriber's email address (required)
- FIRST_NAME – subscriber's first name
- LAST_NAME – subscriber's last name
- TIMEZONE – subscriber's timezone (eg. "Europe/Tallinn", "PST" or "UTC"). If not set defaults to "UTC"
- MERGE_TAG_VALUE – custom field value. Use yes/no for option group values (checkboxes, radios, drop downs)

Additional POST arguments:

- FORCE_SUBSCRIBE – set to "yes" if you want to make sure the email is marked as subscribed even if it was previously marked as unsubscribed. If the email was already unsubscribed/blocked then subscription status is not changed by default.
- REQUIRE_CONFIRMATION – set to "yes" if you want to send confirmation email to the subscriber before actually marking as subscribed
Example

curl -XPOST 'http://localhost:3000/api/subscribe/B16uVTdW?access_token=ACCESS_TOKEN' --data 'EMAIL=test@example.com&MERGE_CHECKBOX=yes&REQUIRE_CONFIRMATION=yes'
So it's relatively simple – all we need is a small utility that adds on the access token (and uses the right list ID). Here's a chunk of the main code:
impl_web! {
    impl MailtrainFormPost {
        /// Subscription endpoint
        #[post("/subscribe")]
        #[content_type("text/html")]
        fn subscribe(&self, mut body: SubscribeForm) -> Result<String, ()> {
            // Override the settings for forcing subscription/requiring confirmation
            // in the future maybe we'll allow this to come directly from the POST request
            body.force_subscribe = Some(String::from(if self.mailtrain_force_subscribe { "yes" } else { "" }));
            body.require_confirmation = Some(String::from(if self.mailtrain_require_confirmation { "yes" } else { "" }));

            // Pull the sub information
            let email = body.email.clone();
            let maybe_sub_info = serde_json::to_value(body);
            if let Err(_err) = maybe_sub_info {
                return Ok(String::from("Bad request, invalid JSON"));
            }
            let sub_info = maybe_sub_info.unwrap();

            // Build full URL for the subscribe POST
            let url = format!(
                "{}/subscribe/{}?access_token={}",
                self.mailtrain_api_url,
                self.mailtrain_list_id,
                self.mailtrain_api_key,
            );

            // Perform the request
            debug!("Sending sub -> {}", &sub_info);
            let resp = ureq::post(url.as_str()).send_json(sub_info);

            // If an error happened then return a generic error
            if !resp.ok() {
                match resp.into_string() {
                    Ok(body_str) => {
                        warn!("Mailtrain request failed -> {}", body_str);
                        warn!("Hint: is the incoming email address properly URL encoded?");
                    }
                    Err(err) => warn!("Failed to convert response body for printing...: {}", err),
                }
                return get_asset_contents("subscription-failed.html");
            }

            debug!("Successfully subscribed email [{}]", email);

            // Retrieve the compiled-in asset
            return get_asset_contents("subscription-successful.html");
        }
    }
}
This code is using a Rust web framework called tower-web. You can read (all of) the code at mailtrain-subscribe-form-post-sidecar.
With this "sidecar" container in place, I can dramatically reduce mailtrain's exposure to the internet (bye bye Ingress), but I can still enable people to subscribe to the mailing list. I happen to use Traefik as my ingress controller (the Traefik Kubernetes Ingress Controller), so I had to work with the Ingress annotations to get what I wanted:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog
  namespace: vadosware-blog
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/limit-rps: "20"
    ingress.kubernetes.io/proxy-body-size: "10m"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "traefik"
    # The line below is *IMPORTANT*, we want '/mailing-list' to get stripped
    # (and of course, it doesn't matter if '/' gets stripped since it must be re-added)
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  tls:
  - hosts:
    - "www.vadosware.io"
    - "vadosware.io"
    secretName: vadosware-blog-blog-tls
  rules:
  - host: "www.vadosware.io"
    http:
      paths:
      - path: "/"
        backend:
          serviceName: blog
          servicePort: 80
  - host: "vadosware.io"
    http:
      paths:
      - path: "/"
        backend:
          serviceName: blog
          servicePort: 80
  - host: "vadosware.io"
    http:
      paths:
      - path: "/mailing-list"
        backend:
          serviceName: blog
          servicePort: 5000
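With the ingress in place, a quick way to exercise the whole path (Traefik strips /mailing-list, the sidecar sees /subscribe, and it forwards on to mailtrain) is a plain form POST – something like:

$ curl -X POST 'https://vadosware.io/mailing-list/subscribe' \
    --data 'EMAIL=test@example.com'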
Welp, this is probably one of the first major pain points in Rust I've come across. To be fair it's a pain point for many languages and Linux in general, but I was really hoping that Rust might have a better answer than the other languages I've come across. Another point is that I've done nothing to make it better, so I'm just as much to blame as anyone else.
What's frustrating is that Rust actually does have good support for building locally via --target x86_64-unknown-linux-musl (you don't even have to build an alpine container and compile everything in there), but the big problem is that the x86_64-unknown-linux-musl target does not support macros. I've been wary of macros in the past (in lisp at the time) and now they've come to bite me (though of course they're somewhat different and this isn't purely the fault of macros) – macros in rust currently force a dynamic link to be done at runtime. Here's what I did, ultimately failing, to try and build this program statically so it was simple enough to deploy from a docker scratch image:
- Build with --target x86_64-unknown-linux-musl, and run into the macro issue
- serde – I ended up going down the rabbit hole of writing my own Serialize/Deserialize trait implementations
- tower-web (and others, like rust-embed), leading to trying to patch features out of cargo files using replace in Cargo.toml (which didn't work)
- cargo vendor everything into my source code and make modifications to features sections and packages manually
- Try hyper directly (then quickly realize that hyper relies on pin-project)
- I hadn't even gotten to the libnss/nss problem yet (I was eyeing nsss as a solution)

Another language I happen to use a lot, Haskell, doesn't have this problem because it has a somewhat different way of generating things like JSON – generic programming. While we can't expect every system to be Haskell's, I was able to build static binaries in Haskell in the past (with great effort there too), and this is likely one of the reasons I was able to do it relatively easily. Haskell did suffer from the libnss/nss problem, but I never got a chance to try to fix it with nsss since I ended up just switching to Fedora (and giving up on static builds there too).

Anyway, things weren't too hard once I just gave up on static compilation of the Rust binary – the container is plenty small anyway and I wanted to eventually actually ship (which meant producing this blog post and of course deploying the actual functionality).
maille
As noted in the TL;DR, this blog is now using the mailtrain instance! I needed to update the blog with a call to action, so I've added a "molecule" to maille (a Mithril component library I wrote) for email call-to-actions, for use at the bottom of this page. I did a bad thing and added the entire component in a patch version of maille (so I could rush it out for this post), so feel free to check out the documentation section for the MailingListCTA component. I figure most people are more interested in what the usage looks like, so here it is:
<head>
  <!-- ... some other html -->
  <link rel="stylesheet" href="/vendor/maille/maille.min.css">
  <link rel="stylesheet" href="/vendor/maille/mailing-list-cta.shared.min.css">
  <style>
    section#mailing-list-cta-container .maille.component.mailing-list-cta {
      margin-top: 2em;
      box-shadow: 1px 10px 10px #CCC;
      padding: 1em;
      width: 100%;
    }
  </style>
  <!-- ... some other html -->
</head>
<body>
  <!-- ... some other html -->
  <section id="mailing-list-cta-container">
    <div id="mailing-list-cta"></div>
    <script src="/vendor/mithril/2.0.4/mithril.min.js"></script>
    <script src="/vendor/maille/mailing-list-cta.shared.min.js"></script>
    <script>
      var MailingListCTA = MAILLE_MAILING_LIST_CTA.default;
      // Render the component
      document.addEventListener("DOMContentLoaded", function() {
        m.render(
          document.querySelector("div#mailing-list-cta"),
          m(
            MailingListCTA,
            {
              brief: "Want to get an email whenever I add more posts like this one?",
              subscribeForm: {action: "/mailing-list/subscribe"}
            },
          ),
        );
      });
    </script>
  </section>
  <!-- ... some other html -->
</body>
This CTA is now at the bottom of every post (this blog is powered by Hugo, using a theme I modified somewhat heavily – I should open source it one day…).
It was a blast to set up mailtrain and write the associated code – hopefully you've had some fun and learned something along with me. If you have, sign up to the mailing list below!
I mention it all the time, but one of the greatest things about running a kubernetes cluster is that although the initial complexity is intense, once you’re past it and at a relative steady state, deploying new complicated software to your cluster in a repeatable, safe manner is really easy!