Level 1 Automated K8S Deployments With GitLab CI

Kubernetes logo + GitLab logo

tl;dr - Automate your deployments without adding a new reconciliation loop (i.e. Flux or ArgoCD), make a limited-permission ServiceAccount, drop your credentials in a GitLab protected variable and set up some CI steps to build your containers (with CI-powered Docker in Docker) and update your deployments.


DevOps has been growing increasingly important and complicated these days. There’s a wave of new approaches, enthusiasm, companies, and organizations trying to achieve consistency and advanced features in the operations world. It feels a bit like the Javascript crowd (famous for innovation and churn) has discovered the world of the sysadmins. Obligatory link to the CNCF landscape.

It doesn’t always have to be cutting edge all the time though – I’d like to lay out in this post how I deploy this blog, a basically low-tech (and pretty reliable) way of deploying some application that isn’t technically challenging to operate.

Since my current cluster orchestrator of choice is Kubernetes (as opposed to “none”, or Nomad, Ansible, etc.), there will be some incidental complexity here, but the important pieces are simple.

At this point keen readers will note that we’re absolutely still on some definition of cutting edge, and that I’ve lied through my teeth. It’s not a complete lie, because out there on the cutting edge there’s another cutting edge: adding a reconciliation loop (the whole point of Kubernetes, essentially) around your deployments themselves, with tools like Flux and ArgoCD. So the cutting edge I’m avoiding is that second one. And if you’re looking for a guide to setting up automated deployment from CI with SSH or Ansible (which are absolutely valid ways to do deployments!), this isn’t the post for you either.

It feels like too many people skip the step of doing simple CI things in their automation journey, so I thought it was worth sharing. You can get very far with just a little bit of CI magic. There is unfortunately quite a bit of setup required in the background (having a Kubernetes cluster, making a ServiceAccount on the k8s side, clicking around GitLab’s UI), but it’s a once-and-done thing, so I feel it’s worth the tradeoff. Of course, if you’re still SSHing in and deploying, the general scheme of this approach will still work for you – just replace all the Kubernetes stuff with concepts relevant to your deployment strategy (ServiceAccount credentials -> SSH creds, kubectl apply -> systemctl start, etc.).

Anyway, enough preamble, let’s get started.


Step 0: Have a Kubernetes cluster

It’s getting easier and easier these days to get a k8s cluster up and running. A few options:

  • k0s (my personal favorite, my cluster runs on it)
  • k3s
  • kubeadm
  • kind (Kubernetes IN Docker, usually for local dev)
  • minikube (usually for local dev)

Step 1: Create a namespace-local ServiceAccount with sufficiently limited permissions

As you probably don’t want to award an all-powerful ServiceAccount to your CI, here’s the YAML to make one that has some reasonable permissions:


---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci
  namespace: your-namespace

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci
  namespace: your-namespace
rules:
  # core workload resources
  - apiGroups:
      - ""
      - apps
      - extensions
      - autoscaling
    resources:
      - deployments
      - services
    verbs:
      - get
      - list
      - create
      - update
      - patch

  # cert-manager resources
  # (you don't need this if you don't run Cert-Manager)
  - apiGroups:
      - cert-manager.io
    resources:
      - certificates
    verbs:
      - get
      - list
      - create
      - update

  # traefik resources
  # (you don't need this if you don't run Traefik)
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - ingressroutes
      - ingressroutetcps
      - ingressrouteudps
    verbs:
      - get
      - list
      - create
      - update

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci
  namespace: your-namespace
roleRef:
  kind: Role
  name: ci
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: ci
    namespace: your-namespace

The above YAML says more about my setup than it would about a general one, but it should be easy to extrapolate what you’d need to match a more traditional setup. For example, I use Traefik so I don’t actually deal in Ingress but rather IngressRoute objects. I also use the Certificate CRD offered by Cert-Manager because it allows me to decouple TLS certificate generation from serving traffic (Cert-Manager is one of the most useful tools in the ecosystem, if you’ve just learned about it through this blog post, you are welcome).
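Once the Role and RoleBinding are applied, it's worth checking that the permissions actually came out the way you intended. Here's a small sketch (the namespace and ServiceAccount names match the YAML above; `kubectl auth can-i` is a real kubectl subcommand, but treat the specific checks as examples):

```shell
# The identity kubectl impersonates for a ServiceAccount is always
# system:serviceaccount:<namespace>:<name>
NS="your-namespace"
SA="ci"
SA_USER="system:serviceaccount:${NS}:${SA}"
echo "${SA_USER}"

# With a cluster available, you could then probe the RBAC rules, e.g.:
#   kubectl auth can-i update deployments --as="${SA_USER}" -n "${NS}"   # should be yes
#   kubectl auth can-i delete deployments --as="${SA_USER}" -n "${NS}"   # should be no
```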

Step 2: Create ServiceAccount credentials to use with kubectl

We’re going to abuse the system a bit here and use the ServiceAccount to access the cluster from the outside (rather than from a Pod inside the cluster), by exporting its token into a set of credentials. Since doing this by hand was really annoying, I’ve automated it:

ci-kubeconfig: generate
  export SECRET_NAME=$(shell $(KUBECTL) get secrets -n your-namespace | grep blog-ci | cut -d' ' -f1) && \
  export CA=`$(KUBECTL) get secret $$SECRET_NAME -o jsonpath='{.data.ca\.crt}'` && \
  export TOKEN=`$(KUBECTL) get secret/$$SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode` && \
  export NS=`$(KUBECTL) get secret/$$SECRET_NAME -o jsonpath='{.data.namespace}' | base64 --decode` && \
  ./write-serviceaccount.bash && \
  mv /tmp/generated.kubeconfig.yaml ../secrets/ci.kubeconfig.yaml

And here’s what’s in write-serviceaccount.bash:


echo -e "---
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${CA}
    server: https://<your cluster url goes here>:6443
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${TOKEN}
" > /tmp/generated.kubeconfig.yaml

It actually took a surprising amount of time to get the automation for spitting out service account creds just right – there’s almost certainly a Krew plugin that would do it, but Kubernetes has always been a bit weird about managing users (pushing people towards OIDC auth, but providing ServiceAccounts and creds-via-X.509-certs which are kind of similar).

Before you go recklessly storing secrets in your repository, make sure to jump ahead and read the section on storing secrets at the end of this post!!

Step 3: Add the ServiceAccount credentials to GitLab as a protected variable

To be able to run kubectl from CI, we’re going to need access to the credentials we’ve just generated (@ secrets/ci.kubeconfig.yaml) for the ServiceAccount we created in the previous step. This is relatively easy to do, but does require some clicking around in GitLab’s UI, unless you want to start mucking about with automation for that too (GitLab has an API, so it is possible…).

The UI is pretty easy on the eyes – as of this post it looked like this:

Gitlab CI Variables UI
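For completeness, here's roughly how a job later turns that variable back into a file kubectl can use. The variable name CI_KUBECONFIG is my assumption here – use whatever name you gave it in the UI (GitLab also has a "File" variable type that hands you a path directly, which skips this step entirely):

```shell
# Materialize the protected variable into a kubeconfig file.
# (dummy fallback value so this sketch also runs outside of CI)
CI_KUBECONFIG="${CI_KUBECONFIG:-apiVersion: v1}"
printf '%s\n' "${CI_KUBECONFIG}" > ci.kubeconfig.yaml
export KUBECONFIG="${PWD}/ci.kubeconfig.yaml"
echo "KUBECONFIG=${KUBECONFIG}"
```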

Step 4: Set up GitLab CI in your actual repository

Here’s the GitLab CI YAML that powers this very blog:

image: alpine:latest

stages:
  - build
  - deploy

cache:
  paths:
    - kubectl
    - kustomize

build:
  stage: build
  image: alpine:latest
  artifacts:
    expire_in: 1 week
    paths:
      - vadosware/dist
  script:
    - apk add hugo make
    - make build

deploy:
  stage: deploy
  image: docker
  services:
    - docker:dind
  only:
    - main
  variables:
    KUBE_VERSION: v1.20.0
    KUBECTL: kubectl --v=2
  needs:
    - job: build
      artifacts: true
  script:
    # Install kubectl (use cached version otherwise)
    - apk add curl make hugo gettext git openssh
    - |
      if [ ! -f kubectl ] ; then
        curl -LO "https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl"
        chmod +x kubectl
      fi
    - cp kubectl /usr/bin/kubectl
    # Install kustomize
    - |
      if [ ! -f kustomize ] ; then
        curl -LO "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64.tar.gz"
        tar -xvf kustomize_${KUSTOMIZE_VERSION}_linux_amd64.tar.gz
      fi
    - cp kustomize /usr/bin/kustomize
    # Docker login
    - docker login -u gitlab-ci-token --password $CI_BUILD_TOKEN registry.gitlab.com
    - make image publish deploy-k8s-ci

Not particularly optimal (I could have baked the various binaries into a pre-built image), but it works and I don’t have to think about it. There’s also some caching, so I’m not being a completely bad net citizen: kubectl and kustomize only get downloaded every once in a while. The reason I download kustomize separately is that it was the case in the past (and is likely to be in the future) that the slight lag between kustomize and kubectl releases meant some features weren’t present in the kustomize bundled with kubectl.

To test changes to the .gitlab-ci.yml file, I generally work in a branch, and temporarily add that branch as a “protected” branch while testing (so that the protected variables are available to its pipelines).
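If you'd rather not juggle the protected flag on throwaway branches, one alternative (a sketch using GitLab's predefined CI_COMMIT_REF_PROTECTED variable; note that `rules:` replaces `only:` if you adopt it) is to gate the deploy job on the branch actually being protected:

```yaml
deploy:
  rules:
    # only run the deploy job when the ref itself is protected
    - if: '$CI_COMMIT_REF_PROTECTED == "true"'
```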

Extra: Bring your own Makefile targets

You probably noticed that I use make pretty extensively, and there were some targets above that describe what they’re doing but don’t tell you how. You’re going to have to bring your own Makefile for targets like image, publish, and deploy-k8s-ci, but they should be pretty easy to write. Here’s deploy-k8s-ci for the lazy:

deploy-k8s-ci:
    $(MAKE) -C infra/kubernetes deployment svc ingress hpa

And if you follow that recursive make call:

# ... lots more makefile magic ...

GENERATED_DIR ?= generated

generate: generated-folder
    $(KUSTOMIZE) build -o $(GENERATED_DIR)/

# If the deployment is present then patch it with a deployDate label (&& case)
# If the deployment is not present then apply it for the first time (|| case)
deployment: generate
ifeq (, $(shell $(KUBECTL) get deployment $(K8S_DEPLOYMENT_NAME) -n $(K8S_NAMESPACE)))
    @echo -e "[info] no existing deployment detected"
    $(KUBECTL) apply -f $(GENERATED_DIR)/apps_v1_deployment_blog.yaml
else
    @echo -e "[info] patching existing deployment ${K8S_DEPLOYMENT_NAME} in namespace ${K8S_NAMESPACE}"
    $(KUBECTL) patch deployment $(K8S_DEPLOYMENT_NAME) -n $(K8S_NAMESPACE) -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"deployDate\":\"`date +'%s'`\"}}}}}"
    @echo -e "[info] done"
endif

deployment-force: generate
    $(KUBECTL) apply -f $(GENERATED_DIR)/apps_v1_deployment_blog.yaml

svc:
    $(KUBECTL) apply -f svc.yaml

ingress: generate ingress-middleware certificate
    $(KUBECTL) apply -f $(GENERATED_DIR)/traefik.containo.us_v1alpha1_ingressroute_blog-http.yaml
    $(KUBECTL) apply -f $(GENERATED_DIR)/traefik.containo.us_v1alpha1_ingressroute_blog-https.yaml

hpa:
    $(KUBECTL) apply -f hpa.yaml

# ... lots more makefile magic ...

The use of kustomize here is a bit old (it’s not as overlay heavy as my more recent code), but the code does work (for me, YMMV).
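The deployDate label in the deployment target is doing the real work: patching a new label into the pod template changes the template, which makes Kubernetes roll the Deployment even when nothing else changed. Stripped of the Makefile escaping, the patch payload it builds looks like this:

```shell
# Build the same JSON patch the Makefile sends via `kubectl patch`
DEPLOY_DATE="$(date +'%s')"
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"deployDate\":\"${DEPLOY_DATE}\"}}}}}"
echo "${PATCH}"

# With a cluster available (deployment name is this blog's):
#   kubectl patch deployment blog -n your-namespace -p "${PATCH}"
```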

ASIDE: Store credentials in your repo if you want, but encrypt them

Don’t listen to people who tell you to never store your credentials in a repo. If you believe in proper encryption, then you believe in storing credentials in your repo, provided that they’re properly encrypted. Check out git-crypt – it’s one of the simplest implementations of this that I’ve seen, and the way it integrates (as a git filter) is really easy to use. On the more enterprise-ready side there is SOPS by the heroes over at Mozilla.

Note that while git-crypt supports a symmetric key (which you’re exporting and storing offline/in a password manager/somewhere nice and safe, of course!), GPG users are a fantastic way to mediate who has access; git-crypt add-gpg-user makes things really easy.
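A rough sketch of the git-crypt wiring (the secrets/ path matches this post; the commands are git-crypt's real subcommands, but the flow is abbreviated):

```shell
# One-time setup (run inside the repo):
#   git-crypt init
#   git-crypt add-gpg-user your@email.example
#   git-crypt export-key /somewhere/safe/git-crypt.key   # offline backup of the symmetric key
# The piece that actually routes files through encryption is a .gitattributes
# entry telling git to run the git-crypt filter over everything under secrets/:
printf 'secrets/** filter=git-crypt diff=git-crypt\n' >> .gitattributes
cat .gitattributes
```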


This was a pretty quick guide to my laid-back deployment process, which I set up after writing a blog post in a cafe where the network wouldn’t let me deploy easily. Rather than doing the reasonable thing and setting my blog up on some shared hosting or picking something like Neocities, I figured the answer was to set up CI on the repository. Another day, another yak shaved.

I’m still planning on going through and taking a thorough look at Flux and ArgoCD, but until I do, this is the easy way I do my deployments, taking advantage of the ease that comes with the sunk cost of running Kubernetes.
