Serving HTTP Applications on Kubernetes with Ingress

UPDATE This configuration previously contained LoadBalancer as the spec.type, but it turns out that I don’t actually need to set it to LoadBalancer. Basically, LoadBalancers are for use in cloud provider environments, and they create their own ingresses according to the documentation. This was pointed out to me by Thomas Barton, who came across this post on HackerNews, and of course I wanted to pass the information on. Check out the section with the configuration for the changes and a small explanation.

As the logical next step to my recent blog series that chronicled the setup of my small single-node Kubernetes cluster running on CoreOS, I’m going to explore what it took (well, the notes on what it took) to set up an HTTP application on my Kubernetes cluster. The blog series left off at the point where I had a single-node Kubernetes cluster running on CoreOS completely set up, and now the real work is up next – porting all my applications and various infrastructure pieces to the new infrastructure.

As far as I’m concerned, the easiest pieces to port (and therefore the ones to do first for the quick endorphins) are the stateless HTTP applications. Since this was the first time I was porting an existing application to run on Kubernetes, I also did a lot of reading of the manual (as you’re about to find out).

While getting a container up and running is generally as easy as just kubectl run, there is a little bit more to explore here, especially considering Kubernetes’ new(ish?) Ingress related resources.

Step 0: RTFM

Of course, with any new project, the first step is to RTFM. The general Kubernetes documentation is of course a good start, and I found the following links particularly helpful (in order of relevance, first to last):

QUICK TIP cAdvisor is a basic resource monitoring tool that’s automatically available on your cluster; here’s a short guide on how to access it:

  • kubectl get pods --all-namespaces, or kubectl get pods --namespace=kube-system (Find your api server’s pod)
  • Copy the full name of the api server’s pod
  • kubectl port-forward kube-apiserver-<likely machine ip here> 8888:4194 --namespace=kube-system (port forwards port 4194 of the API server to your local host)
  • Go to localhost:8888 in your browser, and you should see cAdvisor!
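
Put together, the quick check looks roughly like this (the API server pod name is whatever kubectl shows you – the one below is just a placeholder):

# Find the API server's pod (name will differ per cluster)
kubectl get pods --namespace=kube-system

# Forward cAdvisor (port 4194 on the API server pod) to localhost:8888
kubectl port-forward kube-apiserver-<likely machine ip here> 8888:4194 --namespace=kube-system

# ...then open http://localhost:8888 in your browser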

Step 1: Set up a private registry

Feel free to skip this step if you already have somewhere you store your Docker images – I personally chose to store my images on GitLab’s Container Registry. GitLab’s container registry is free with GitLab repositories (or if you host GitLab yourself), and it works great for me.

This step seems pretty simple, but is emblematic of an operations paradigm shift for me. My “process” now looks like this:

  1. Write code
  2. Build a docker image
  3. Push the docker image to my private repository
  4. Update Kubernetes resource configuration for the service/app
  5. Apply Kubernetes configuration (triggering a deployment of the app)

This step is basically equivalent to #3 in that listing – to make this new deployment process work, you certainly need a private registry that Kubernetes can pull images from.
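
A rough sketch of one iteration of that loop (using the image name from the Deployment further down; vadosware-blog.yaml is a hypothetical file holding the resource configuration):

# 2. Build (and tag) the Docker image
docker build -t registry.gitlab.com/my/private/gitlab/repo/vadosware-blog:latest .

# 3. Push the image to the private registry (after a docker login to registry.gitlab.com)
docker push registry.gitlab.com/my/private/gitlab/repo/vadosware-blog:latest

# 4 & 5. Update and apply the Kubernetes resource configuration
kubectl apply -f vadosware-blog.yaml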

The documentation for how to add a private registry is pretty straightforward and useful, and I was able to set up the registry with little to no trouble (thanks of course to GitLab as well, which has great documentation/ergonomics).
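
For reference, the imagePullSecrets entry you’ll see in the Deployment in the next step needs a matching docker-registry secret in the cluster – something along these lines (the username/token/email values are placeholders you’d fill in with your own GitLab credentials):

kubectl create secret docker-registry vadosware-gitlab-registry \
  --docker-server=registry.gitlab.com \
  --docker-username=<gitlab-username> \
  --docker-password=<gitlab-access-token-or-password> \
  --docker-email=<email>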

I found that using a blog project (I actually used this blog you’re looking at now) as the guinea pig for testing out this new process was very useful. Since the application is pretty simple, I can focus on adjusting myself to the process and making sure it’s streamlined, rather than worrying about a lot of other pieces of infrastructure (like a database or caching layer, per se).

Step 2: Make the simplest Deployment + Service you can muster

After getting a private registry set up and enabling Kubernetes to pull images from it, the next step is to start setting up a Kubernetes resource that pulls from the private registry. After deploying it, you can use the usual kubectl port-forward command to port forward in and make sure the container is running.

Here’s a simple configuration for the deployment of this blog:

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: vadosware-blog-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        env: prod
        app: vadosware-blog
    spec:
      imagePullSecrets:
        - name: vadosware-gitlab-registry
      containers:
      - name: vadosware-blog
        image: registry.gitlab.com/my/private/gitlab/repo/vadosware-blog:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80

Pretty basic stuff there – a standard Kubernetes Deployment (remember, deployment = pod + replicaset, roughly). You should be able to apply this configuration with a command like kubectl apply -f <path-to-the-file>.yaml, run kubectl get pods, and see the pod created by your deployment. Assuming you can access the basic deployment, I often just double check that it’s doing what I expect by port-forwarding to it (kubectl port-forward <instance-name-of-pod> 8888:<whatever port it’s serving on>).
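
Concretely, that check looks something like this (assuming the Deployment above is saved as vadosware-blog-deployment.yaml – the file name and pod suffix are just placeholders):

kubectl apply -f vadosware-blog-deployment.yaml
kubectl get pods

# Forward local port 8888 to the blog container's port 80 and poke it
kubectl port-forward vadosware-blog-deployment-<pod suffix> 8888:80
curl -i http://localhost:8888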

To expose this blog inside the cluster (and eventually outside the cluster), the next step is to hook up a Kubernetes Service to the existing deployment. Here’s what mine looks like:

---
kind: Service
apiVersion: v1
metadata:
  name: vadosware-blog-svc
spec:
  type: ClusterIP
  selector:
    app: vadosware-blog
  ports:
    - name: http
      protocol: TCP
      port: 80

UPDATE This configuration previously contained LoadBalancer as the spec.type but it turns out that actually I don’t need to set it to LoadBalancer. Basically, LoadBalancers are for use in cloud provider environments, and create their own ingresses. Here’s a paraphrased version of the explanation that Thomas sent to me:

ClusterIP = base (an IP for the service, internal to Kubernetes)

NodePort = ClusterIP + a port exposed on every node (the internal service IP, plus the same port exposed on every node, routing to the ClusterIP)

LoadBalancer = ClusterIP + NodePort + cloud magic (the cloud provider sets up an ingress-type/load-balancer resource that routes to the NodePorts)

Despite my misconfiguration here (which I’ve corrected now – the spec.type used to be LoadBalancer), the previous configuration still works, because of the way these options layer on top of each other. Back to our regularly scheduled programming…

Pretty basic service definition there – nothing new (if you’re familiar with Kubernetes resource configurations). You can test that the service is up and running with kubectl get svc, and get more detailed information about it with kubectl describe svc <service-name>.
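
Something like the following is usually enough (the Endpoints field in the describe output should list at least one pod IP – more on that later):

kubectl get svc vadosware-blog-svc
kubectl describe svc vadosware-blog-svc   # check that Endpoints isn't empty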

Next is the more interesting bit of this post, the Ingress:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vadosware-blog-ing
  annotations:
    ingress.kubernetes.io/class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/limit-rps: "20"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: vadosware-blog-svc
          servicePort: 80

This is the configuration for something Kubernetes calls an Ingress. The job of an Ingress is to connect the internet to pods (exposed by services) inside your cluster. It turns out Ingresses (and all the machinery that makes them work) aren’t quite configured/set up automatically (at least when I was using Kubernetes), and we’re going to go into how to set that up next.

Step 3: Set up an ingress resource + controller for your Kubernetes cluster

Before Kubernetes Ingress resources can do anything (it’s possible to create them without everything installed properly, they just won’t do anything), the Kubernetes cluster has to have something called an “Ingress Controller” set up for it (much like replication requires a “Replication Controller”). Here’s how I got on installing an Ingress Controller (and figuring out how everything was supposed to work together at all):

Step 3.1: Get some background/examples on Ingress controllers

I found a GitHub repo that contained the example using the nginx controller to be very useful (it’s from the general examples repo). While overwhelming at first, in that it explains a lot by assuming you already have things set up, at the end of the day you can see that an Ingress Controller is just another Kubernetes resource that you can kubectl apply -f and have pop into existence. Once you have an Ingress Controller, you can start taking advantage of all the cool features related to Ingress.

There are lots of Ingress controllers that you could use, but I went with the pretty standard NGINX Ingress Controller. Before I knew ingress controllers existed, I was planning on reverse-proxying everything through NGINX anyway, so it’s great news that they have an ingress controller that will do that from within the bounds of Kubernetes for me.

Reading up on the examples and the Ingress documentation drove me to the conclusion that what I needed to do was:

  1. Create an Ingress Resource (just configuration) that describes how Ingress should work for a particular service/app (this link to the github that contains an nginx deployment was particularly helpful). These live with/belong to a service, and they’re the rules that determine how the service can be accessed from the outside world. You can list kubernetes ingresses with kubectl get ing.
  2. Test the NGINX Ingress Controller installation to make sure it was created properly. The Ingress Controller itself actually contains a deployment, so you can test it with kubectl get deployments --namespace=kube-system and make sure that you see one prefixed with ’nginx-ingress-controller-’ (maybe different depending on what you name it/if the guide has changed).
  3. Properly configure the NGINX Ingress Controller. The NGINX ingress controller pulls configuration from a ConfigMap, and while it’s not strictly required, if you’re trying to do fancy things with NGINX it will likely be the only way to get what you need done. Check out the example ConfigMap for an idea of what this ends up looking like (there’s also a small sketch just after this list).
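
Here’s a minimal sketch of that ConfigMap, applied straight from stdin (the ConfigMap name and the proxy-body-size key are just examples of the kind of thing you can set – check the nginx ingress controller documentation for the keys you actually need, and note that the controller is normally pointed at the ConfigMap with a --configmap argument, which my deployment below doesn’t include):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
  namespace: kube-system
data:
  # example tweak: allow larger request bodies than the nginx default
  proxy-body-size: "10m"
EOF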

I found this older “complete” guide to also be pretty useful. I wouldn’t recommend following it completely, but it’s a good resource to look at to at least get an understanding of what’s going on (if you don’t already, at this point). The best guide to follow (I literally called it “BEST GUIDE” in my notes) for me was the nginx ingress examples repo.

Step 3.4: Create the Ingress controller

After looking at the various documentation, guides and examples above, here’s what my ingress controller looks like:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend

This is basically copied right out of an example, and it worked like a charm for me. After kubectl apply -fing this deployment configuration, the ingress controller was running. Here’s how I checked (the Ingress created earlier now shows an address, which means the controller has picked it up):

$ kubectl get ingress
NAME                 HOSTS     ADDRESS        PORTS     AGE
vadosware-blog-ing   *         <machine ip>   80        51m

Immediately, I noticed a few changes:

  1. Bad TLS Certificate warning/error on the website (just visiting the IP of the server in a browser)
  2. After adding the temporary exception, I got redirected to a 503 served by NGINX. I could confirm that this was the ingress controller’s NGINX because the version was actually different from what I was running before, so I could tell that it was broken (as in not proxying properly), but working (doing redirection at all)! You likely won’t get this error, since the configurations I’ve posted here are mostly from after everything was working.

Fixing these issues is very specific to me, as it has to do with how I configured TLS. For testing purposes you don’t need TLS – you can actually disable the SSL/TLS redirection (which redirects http:// requests to https:// requests, and is on by default) so you can test simple containers that only expose apps over HTTP.

I had some issues using some of the annotations on my cluster (I kept running into documentation that was for Google Cloud Platform). Here’s what worked for me:

ingress.kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "false"

A good way to check up on the annotations and whether they’re configured properly is kubectl describe ing <ingress name> (recognized annotations will be listed there).

Step 3.5: Create the default HTTP backend

The NGINX Ingress controller needs a “default http backend” which looks like this (for me):

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend

This is the app that’s going to be called as a result of invalid/unspecified routes (e.g. 404s) and other issues when they pop up.
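
A quick sanity check is to port-forward to the default backend and hit its two endpoints (the pod name suffix is whatever kubectl gives you):

kubectl get pods --namespace=kube-system -l k8s-app=default-http-backend
kubectl port-forward --namespace=kube-system default-http-backend-<pod suffix> 8080:8080

# ...then, in another terminal:
curl -i http://localhost:8080/healthz   # should return 200
curl -i http://localhost:8080/          # should return the default 404 page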

Step 3.6: Validate/Debug the Ingress controller

Debugging from INSIDE the Ingress Controller

If you’re having issues with the Ingress controller, I’ve found that it’s important to remember that you can actually debug the ingress controller by just /bin/bash-ing INTO it! The ingress controller is just a pod that’s running NGINX, with some code on top to automatically generate the configuration it uses. Here’s how I did it:

  1. kubectl get pods --all-namespaces (find the pod that’s being used for your ingress controller)
  2. kubectl exec -it -n kube-system nginx-ingress-controller-<random gibberish> /bin/bash (/bin/bash in to the pod)
  3. Inspect/check out the NGINX configuration @ /etc/nginx

By looking from inside this container, I was able to debug the NGINX configuration by looking at the endpoints that were set up, the upstream servers it was trying to reach, and everything else. You can even do things like wget or curl from inside the container to ensure that the upstream services are reachable.
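
For example (the pod name is a placeholder, and the cluster DNS name assumes my blog’s Service lives in the default namespace, so it resolves as vadosware-blog-svc.default.svc.cluster.local):

kubectl exec -it --namespace=kube-system nginx-ingress-controller-<pod suffix> -- /bin/bash

# ...then, from inside the container:
grep -A 5 upstream /etc/nginx/nginx.conf                      # see the upstreams nginx was generated with
curl -i http://vadosware-blog-svc.default.svc.cluster.local   # make sure the upstream service is reachable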

Rabbit hole: A service with no endpoints

While doing initial set up, I found that although I had the blog service up, the ingress resource for the blog configured (and present), along with an ingress controller started and running, I still couldn’t properly access the blog. I did everything I could to debug, jumping into the ingress controller and looking for the endpoint it was trying to hit, doing nslookup to make sure the service name resolved over DNS, but nothing seemed to work.

After lots of debugging and checking things, and sleeping on it, I managed to read in the output of kubectl describe svc <the-service-name> that, despite existing, the service had no endpoints. A service without any endpoints is quite a problem – you can’t access any pods if there are none to access.

It didn’t make sense to me that a pod could be configured, running, and pointed to by a service, yet the service didn’t know about the endpoint it was supposed to be accessing. Had I stumbled upon a core error in Kubernetes? Unlikely. Of course, the documentation had an actual FAQ entry on it, and it reassured me that I was doing something wrong, rather than the other way around. The issue ended up being a typo, caused by inconsistency in how I was selecting the pods for my services. The service wouldn’t pick up the right pod because I had typed the identifier (selector) in wrong. After fixing that, it started working.
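
If you hit the same thing, comparing the service’s selector against the labels the pods actually carry surfaces the mismatch quickly (the label values here come from my configs above):

kubectl describe svc vadosware-blog-svc | grep -i endpoints   # empty means no pods matched the selector
kubectl get pods -l app=vadosware-blog --show-labels          # does anything actually carry this label?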

At this point, everything is working! After dealing with these small issues, this “simple” HTTP serving app was properly running, with proper Ingress for the cluster. After making sure the proper ports were open and receiving traffic, I was able to access the pod through the service exposed by the ingress controller on the cluster, from the open internet (there’s a mouthful).

I was quite surprised when my simple service (the blog) didn’t seem to be getting an actual IP – the EXTERNAL-IP column of the kubectl get services output would show <pending> and worried me, but it turns out that’s OK (my stuff works just fine despite this).

Reflections on getting everything working

I went through a lot of resources in trying to figure out everything that was happening. Here’s a list:

Hopefully you’ll find some of these resources useful – it seems like a lot, but I’ve had harder times setting up infrastructure pieces like Postfix, which also has great documentation, but when an issue arises it’s very hard to track down the concepts and changes that are needed to fix it.

After getting this basic app working with Ingress, I’m pretty happy with the reproducibility of the setup and the consistency that I’ll be able to take advantage of when working on new projects. I’m particularly excited about the name-based virtual hosting feature, since I often use one machine for multiple sites, and doing it this way is much better than manually managing NGINX configurations (which is what I did up until now). I know it’s a bit of hyperbole to say that it’s “much better” since in the end I put some configuration into a YAML file instead of into an NGINX configuration file, and I use kubectl apply instead of ansible-playbook or scp – but the fact that all these concerns are taken care of together, and consistently, by using the “Kubernetes way” is a nice improvement over how I was deploying before. Ansible got me part of the way, lifting me out of SSH-in-and-do-stuff-manually land, but Kubernetes is likewise a welcome move to another level of abstraction, with lots more built in for me.