
Fresh Dedicated Server to Single Node Kubernetes Cluster on CoreOS, Part 3: Setting up essential Kubernetes addons


This is the third in a series of blog posts centered around my explorations and experiments with using Kubernetes and CoreOS to power my own small slice of infrastructure. Check out the previous posts:

  1. Part 1 (Setting up the server with CoreOS)
  2. Part 2 (Getting Kubernetes running)
  3. Part 3 (Setting up essential Kubernetes addons) (this post)

tl;dr - Kubernetes has some pretty important addons like DNS and the Dashboard; here I go through deploying them, along with my thought process as I debugged issues. Skim through for the resource configurations that worked (the versions posted here are the final ones, after all the fixes/debugging sessions).

Part 2 left off at the point where I’d just gotten the Kubernetes single-node cluster up and running, ran some small tests to make sure the API server was running, and set up kubectl to access the master node remotely from my home machine. If you’re following along at home and not there yet, you might want to go back to the other posts and see if they’ll help, or check out the step-by-step CoreOS + Kubernetes documentation directly.

One of the biggest reasons I wanted something above just docker-containers-running-on-a-machine is that I really wanted a better solution for inter-container networking, using DNS if possible. In my previous infrastructure iteration, I’d run a bunch of containers on the machine, and then at container creation time, do some shell commands to suss out the host machine IP, and pass it to the container as an environment variable to use – this is how my containers would access other services, like systemd-managed databases – Postgres, RethinkDB, and whatever else. In the container-driven future, surely people aren’t doing hacks like this to access their databases/other services… Maybe their databases are even in containers too!

SIDEBAR: My thoughts on database processes in containers

People were (maybe still are) skittish when it comes to running databases in containers, and I struggled with the concept in a vague way for a long time too, as it just “felt” wrong. However, I’ve never seen a convincing reason as to why not. I’ve come to the conclusion that it’s OK in the majority of cases I deal with to run databases in containers, and here’s my reasoning: at the end of the day, a database is just a program that manages some in-memory or on-disk data, and while there may be some performance dings from the filesystem and resource indirection that containers necessitate, I think the tradeoff is well worth it for the isolation you get in return. Databases are often set up manually on the servers they run on and treated very much as pets (as opposed to cattle), and I’d like to stop doing that. Databases are just applications that serve a different purpose, and the less we treat them like snowflakes the closer we get to building infrastructure that “just works” consistently and uniformly. Of course, YMMV, figure out what’s right for your own infrastructure needs. At this point, I haven’t seen any convincing arguments against running databases in containers (and I can think of at least one benefit – restricting the resources available to the database so it can’t just run away with your memory), so until I do I’ll be lightly holding this opinion.

Docker Compose offers a solution to this in that it creates some networks and DNS entries by default. Docker Compose is a good step up in orchestration compared to vanilla docker, and I was pretty impressed the first time I saw it. It was the obvious next step in the docker ecosystem even then, but I was pleasantly surprised by the ergonomics of the interface, being just some yaml files and covering most of the use cases necessary to start a wide variety of bundled-together apps. Unfortunately I’ve had my problems with Docker Compose starting and shutting down cleanly (along with the containers it’s started) on some different distributions, so I never quite started using it extensively. It’s also entirely possible that I was just using it wrong the entire time, but I kept doing simple, straightforward steps and finding that I’d have some containers left over for particularly large apps that involved a bunch of different parts. Originally I planned to just run all my services with Docker Compose, but found its introspection features and controls sort of lacking/uncomfortable. Either way, Kubernetes offers what Docker Compose offered and more, all in what I find to be a very self-consistent ecosystem. So that’s a bit on the motivation for adding the DNS add-on.

OK, so let’s get back on track. DNS is a great feature to have – an app can access the database at an address like postgres://backend inside a container and it will just resolve to one (or more!) containers that can handle the request. Outside of just making configuration easier, it also allows for the possibility of round-robin load-balancing that address to access multiple backends (I’m unlikely to have to do that in the future, but eh). Along with easy DNS networking, another feature I really wanted was a great UI for administration/metrics on the platform. Not much needs to be said about the importance of at least SOME monitoring, and it turns out the Kubernetes Dashboard (which I assume is the base for Google Cloud Platform’s dashboard) is actually extremely easy to install and use! Kubernetes also comes with some built-in monitoring tools which aren’t too shabby themselves.
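
To make the DNS part concrete, here’s a sketch of what that naming looks like (the postgres service here is hypothetical; cluster.local is the cluster domain used in the DNS addon config below):

# A Service named "postgres" in the "default" namespace would be reachable from pods at:
#   postgres                               (from pods in the same namespace)
#   postgres.default                       (<service>.<namespace>)
#   postgres.default.svc.cluster.local     (fully qualified)
$ kubectl exec -it <some-app-pod> -- getent hosts postgres.default   # assuming the image has getent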

Let’s get on to what I went through to install these two tools:

Installing the DNS addon

Step 0: Find the right documentation

While it was a little difficult to navigate the documentation and figure out where I should be looking, I went from the page describing how the DNS addon worked to the README for the DNS addon in the addons repository, and figured out that all it takes to set up the DNS service is to run the system-level resources (including the service, deployment, etc.) in your kubernetes cluster. I mean this to say that you just need to write a kubernetes resource config (think of the yaml that powers deployments, pods, configmaps, everything basically) and apply it, and the DNS service should start running and enriching other containers.
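
In other words, the whole install boils down to something like this (the file itself is what gets built in the next step):

$ kubectl apply -f kubernetes/addons/dns.yaml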

Step 1: Building the right resource configuration

Here’s what mine looks like (I’ve basically just copied all of the pieces from the addons/dns repository section into one file for easy kubectl apply -fing):

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# Warning: This is a file generated from the base underscore template file: kubedns-controller.yaml.base

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

---
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Warning: This is a file generated from the base underscore template file: kubedns-svc.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.3.1.1
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

NOTE - It’s a liability to do this, rather than maybe submoduling their repo into mine (or something similar) and making sure it’s up to date with master. As soon as they make any improvements, or even when the version of kubernetes changes, these files may become unusable depending on how backwards-compatible the changes are, a surprise you don’t want to have when critical infrastructure is down and you need to get it back up quickly.

This configuration creates a bunch of resources that you should read up on and at least gain a shallow understanding of (of course, some of them you’ll want a deeper understanding of): Services, Service Accounts, ConfigMaps, and Deployments are all used in this one file.
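
Once it’s applied (next step), a quick way to eyeball everything this one file creates (a sketch; the resource names match the config above):

$ kubectl get -f kubernetes/addons/dns.yaml
# or, more manually:
$ kubectl get configmaps,serviceaccounts,deployments,services -n kube-system | grep kube-dns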

Step 2: Applying the configuration

Here’s what it looked like when I first tried to apply the configuration:

$ kubectl create -f kubernetes/addons/dns.yaml
replicationcontroller "kube-dns-v20" created
The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "${DNS_SERVICE_IP}": must be empty, 'None', or a valid IP addres

The issue here was that I left the DNS_SERVICE_IP placeholder in the resource config. This is the location that flanneld uses to direct how DNS names get resolved on the network. If you check the configuration above (it’s from the future; those settings are from after I fixed this issue), you’ll see it’s 10.3.1.1, matching the cluster_dns setting in Part 2. I was confused by the DNS_SERVICE_IP placeholder at first, but it’s exactly what it sounds like – it tells the service where to find the thing that manages DNS for your cluster (an IP that you decided).
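
If you’re working from a template that still has the placeholder, the fix is a one-liner before applying (10.3.1.1 is the value I chose back in Part 2; use whatever fits your own service IP range):

$ sed -i 's/\${DNS_SERVICE_IP}/10.3.1.1/' kubernetes/addons/dns.yaml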

After fixing this and re-running here’s what I got:

$ kubectl create -f kubernetes/addons/dns.yaml
service "kube-dns" created
Error from server (AlreadyExists): error when creating "kubernetes/addons/dns.yaml": replicationcontrollers "kube-dns-v20" already exists

As you can see, the replication controller (roughly, a Deployment is a Pod + ReplicaSet/ReplicationController) was already created, so it was skipped and an error was printed out. This is a tiny detail, but this kind of UI polish is really nice to see – there are a few ways they could have handled this, and printing such a clear concise message is nice.

Step 3: Verifying the results

After the configuration was seemingly applied without a hitch, my immediate next thought was that everything was too easy. Time to verify that things are actually working correctly:

First command I ran was kubectl get pods (this is going to become burned into your finger muscle memory after just a few hours/days of working with Kubernetes), but I was surprised when it came up empty. One important thing to note about Kubernetes is that it very much uses and relies on namespacing - since I didn’t specify a namespace in the get pods command, “default” was assumed, but the pod I wanted to see was in the “kube-system” namespace. There are a few ways I could have specified the namespace: kubectl get pods -n kube-system and kubectl get pods --namespace=kube-system come to mind as what I used a lot earlier on.
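
To make the namespace behavior concrete:

$ kubectl get pods                     # empty here: only looks in the "default" namespace
$ kubectl get pods -n kube-system      # the addon pods live here
$ kubectl get pods --all-namespaces    # everything, when in doubt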

As the guide said to, I used the kubectl get pods --namespace=kube-system | grep kube-dns-v20 command and got this:

$ kubectl get pods --namespace=kube-system | grep kube-dns-v20
kube-dns-v20-g8cxg   0/3       Pending   0          6m

Looks like good news and bad news: the pod is there (yay), but it’s Pending (boo). At first I thought, “maybe it’s fine, maybe it’s just starting up” – this actually turns out to be wrong, but I didn’t know it yet – the kubernetes node (my only one, the master) wasn’t properly configured, and I just didn’t know it, so I thought it was OK that it was in Pending.

I’m going to sacrifice the usefulness of this post as a reference a bit to stay true to the order in which I actually did things, so let’s jump into installing the Kubernetes Dashboard with the knowledge we had up until this point, pretending that we succeeded at DNS even though (with 20/20 hindsight) we know we didn’t.

Installing the Kubernetes Dashboard

The Kubernetes Dashboard has a little less going on (it doesn’t have to be a DNS nameserver for the cluster, for example), so I expected it to be even easier to set up than the DNS service.

Step 0: Find/build the Kubernetes dashboard resource configuration

This was pretty straightforward compared to the DNS addon; it’s pretty well documented in the official kubernetes documentation (which has actually changed a bit since I last used it; I assume the quality didn’t drop, though). The documentation makes it pretty clear where to find the resource configuration:

kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

I preferred to download it locally just to take a look at it, and here’s what mine looks like (currently in production):

# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.6 (RBAC enabled).
#
# Example usage: kubectl create -f <this_file>

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          #- --apiserver-host=http://<machine ip>:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard

I checked the version that was available (at the github raw link above) at the time of the writing of this blog post and it looks to be just about exactly the same – I don’t think much has changed.

Step 1: Create/Apply the resource configuration to the cluster

The next step is to run this configuration against the cluster and see what happens! Dashboard is also a system-level kubernetes resource so you’re going to have to run kubectl get pods --namespace=kube-system to see it.

After seeing that the pod was present, but in the Pending state, I ran kubectl port-forward kubernetes-dashboard-v1.6.0-SOME-ID 9090 --namespace=kube-system to see if I could port-forward to it (after figuring out that the dashboard is supposed to run on port 9090). I got an error that stated that the port couldn’t be forwarded because the pod was in the Pending state. This is when I realized that both the DNS pod I started earlier and the dashboard pod were broken – being stuck pending is clearly an issue (obviously). When I first set up the DNS service, I thought that maybe it being pending was not an issue, maybe it would stay pending until a request came in or the first pod came up or something like that, but clearly that’s not the case.

Step 2: Debugging: Figuring out why the pod is stuck in the ‘Pending’ state

I couldn’t think of any reason that my pods would be stuck Pending, so I started working backwards and trying to find the gap in my understanding. Right there in the documentation, a debugging section had just the information I needed regarding pods that stayed pending. This is what using good/updated documentation looks/feels like, and I was very happy to find the section at the time, another +1 for Kubernetes in my book.

At this point, I’d relatively rarely used kubectl describe pods <pod-name> to inspect my pods and do sanity checks, so being re-introduced to that as a debugging tool did wonders for my progress. This is a big reason why I consider Kubernetes so consistent and great to use – the introspection tools are fantastic, almost always presenting you with enough information to solve your problem. Don’t forget to include --namespace (or the short form -n) when you use kubectl describe <resource> <resource-name> commands on resources that aren’t in the “default” namespace.

Step 2.1: Rediscovering and using kubectl describe
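
The command itself is short, just remember the namespace (the pod name is whatever kubectl get pods -n kube-system shows for kube-dns or the dashboard):

$ kubectl get pods -n kube-system
$ kubectl describe pod <pod-name> -n kube-system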

Here’s what the end of the output for the describe on the pod looked like:

--- lots of output, redacted here ---
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
  FirstSeen  LastSeen Count From   SubObjectPath  Type  Reason   Message
  ---------  -------- ----- ----   -------------  -------- ------   -------
  14m  56s   50 default-scheduler   Warning  FailedScheduling no nodes available to schedule pods

Kubernetes keeps a log of events that happen to a specific resource, and that’s very very good. It helped me to see that the issue here was that Kubernetes was convinced that there are no nodes to schedule pods on.

You likely won’t run into this problem if you use the resource configurations posted on this blog (they’re from the future, where everything is working and the infra is stable), but this is where I discovered the deprecation of --register-schedulable and the change to --register-node in the kubelet service configuration. There were a few different sources that gave me this knowledge, but there was a bit of seemingly wrong information I had to sort through as well.

These resources helped, but didn’t ultimately solve the problem; I still couldn’t get the dashboard running correctly, and got the same error (even after running the kubectl taint command). Next, I tried to run kubectl taint nodes --all node-role.kubernetes.io/master- --namespace=kube-system, thinking maybe just the namespace was missing, and that didn’t work either. It turns out the problem was that the master node hadn’t registered itself in the API at all. It makes perfect sense that the ’taint’ commands were failing, as you can’t taint a node that isn’t there!

I was a little worried that it was a chicken-and-egg problem: how does the master node register itself with the Kubernetes API service that it itself is about to start, when the node needs to be up before the API service? In the regular multi-node flow for Kubernetes it would make sense, because the master node just has to come up before the worker nodes, but I was a little worried about how this would work when the master node WAS the only worker node. To at least confirm that the issue was that Kubernetes didn’t know about the only node (the master node), I ran kubectl get nodes (no --namespace needed), and observed the lack of output. This command should have produced one line, representing the one node (the master node) in the system.

Step 2.2: Making sure the master node could receive work

Thus began the quest to figure out how to properly register the master node as a worker node (my only worker node, in this case). After some searching it became clear that both --api-servers and --register-schedulable are deprecated as kubelet configuration; it looks like the proper way to configure the master as a worker is the following:

  1. Add the following option-setting lines to the kubelet.service unit file:
--require-kubeconfig \
--kubeconfig=/etc/kubernetes/kubeconfig.yaml \
  2. Add a kubeconfig.yaml similar to the one in Kubernetes github issue #36745 that looks roughly like this:
apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: local
contexts:
- context:
    cluster: local
    user: ""
  name: local
current-context: local
kind: Config
preferences: {}
users: []
  3. Restart the kubelet service, and after a few seconds of errors telling you kubelet can’t connect to the API server (which makes sense, because it’s not up yet), everything should quiet down once the API server comes up (started by the node itself). These steps are sketched out as shell commands just after this list.
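
Here’s that sequence as shell commands, purely as a sketch of what I did (the unit and file paths are from my setup and may well differ on yours):

$ sudo vim /etc/systemd/system/kubelet.service   # add the --require-kubeconfig and --kubeconfig flags
$ sudo vim /etc/kubernetes/kubeconfig.yaml       # paste in the kubeconfig shown above
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ journalctl -u kubelet -f                       # errors until the API server comes up, then quiet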

To check that the node was properly being registered, I ran kubectl get nodes to ensure that the node was listed (note --namespace was not needed):

$ kubectl get nodes
NAME           STATUS    AGE       VERSION
<ip address>   Ready     2m        v1.7.2+coreos.0

Now that there is a node to schedule pods on, the reason the dashboard failed to start should no longer be an issue. This is one of the things I mean when I say that Kubernetes is very internally consistent – in hindsight the errors I encountered make obvious sense, and even when things go wrong they seem to be “under control” in a sense. Going from a broken to a properly configured kubelet did, however, mean doing a few things to restart the pieces that were deployed while things were misconfigured. kube-dns (the DNS addon) seemed to be running properly now, but the dashboard service was still broken. I deleted the pod that was running the dashboard, thinking I would restart the dashboard service that way, but was surprised to see it come back almost immediately with a different name but exactly the same configuration – which is exactly how it’s supposed to work, since there’s a ReplicaSet behind it (as part of the Deployment). Re-applying the resource configuration for the dashboard (I also deleted it first, for good measure) was sufficient to restart the dashboard service and update its configuration – I also had to go in and clear out some dangling dashboard-related resources (services, replication controllers, etc.) that were left over from experimentation. Note that you can use kubectl delete -f to delete multiple resources at the same time; no need to do what I did and go through lots of kubectl commands to delete the resources one by one.
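
Since I ended up doing this delete/re-apply dance more than once, here’s the flow as a sketch (the dashboard file name is just what I happen to call it locally):

$ kubectl delete -f kubernetes/addons/dashboard.yaml   # removes the deployment, service, etc. together
$ kubectl apply -f kubernetes/addons/dashboard.yaml    # recreates them from the current config
$ kubectl get pods -n kube-system | grep dashboard     # new pod name, same spec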

Step 2.3: Getting the dashboard service to try again

After restarting the dashboard service, here’s what kubectl describe output looked like (particularly the events section):

Events:
FirstSeen  LastSeen Count From                    SubObjectPath                         Type   Reason                 Message
---------  -------- ----- ----                    -------------                         ------ ------                 -------
1m         1m       1     default-scheduler                                             Normal  Scheduled             Successfully assigned kubernetes-dashboard-3313488171-3grv9 to <machine ip>
1m         1m       1     kubelet, <machine ip>                                         Normal  SuccessfulMountVolume MountVolume.SetUp succeeded for volume "kubernetes-dashboard-token-p919d"
1m         8s       4     kubelet, <machine ip>   spec.containers{kubernetes-dashboard} Normal  Pulled                Container image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3" already present on machine
1m         8s       4     kubelet, <machine ip>   spec.containers{kubernetes-dashboard} Normal  Created               Created container
1m         8s       4     kubelet, <machine ip>   spec.containers{kubernetes-dashboard} Normal  Started               Started container
1m         3s       8     kubelet, <machine ip>   spec.containers{kubernetes-dashboard} Warning BackOff               Back-off restarting failed container

Perfect, this order of events makes much more sense! The scheduler was able to find the machine to assign the dashboard to, and it started pulling everything and starting the dashboard. Now I can check the logs of the actual pod itself, using kubectl logs <pod-name>:

Using HTTP port: 8443
Using in-cluster config to connect to apiserver
Using service account token for csrf signing
No request provided. Skipping authorization header
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.100.0.1:443/version: x509: certificate is valid for 10.3.0.1, <server ip>, not 10.100.0.1
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

Step 2.4: Making sure nodes are getting the right IP (fixing a config mismatch from the past)

Good news and bad news - the dashboard is starting to run, and it’s attempting to connect to the API server (which is what it’s supposed to do). The BAD news is that when I created the API server TLS certificates, I used an IP address (10.3.0.1) that was inconsistent with some later configuration (10.100.0.1). I’m very glad this error was able to alert me to that, as I’m sure it would have caused trouble down the road. The obvious fixes were to either update everything to use 10.3.0.1 and stop using 10.100.0.1 completely, or to regenerate the TLS certs. I went with updating everything to use 10.3.0.1. Changing the IP used for the nodes meant updating the kubelet systemd service file and the manifest for the API server, @ /etc/kubernetes/manifests/kube-apiserver.yaml (for me), and double-checking the flanneld configuration.
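
A quick way to hunt down every place an old IP like this might still be referenced (the paths are the ones I ended up touching on my machine; yours may differ):

$ grep -rn "10.100.0.1" /etc/systemd/system /etc/kubernetes /run/flannel 2>/dev/null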

After updating the IP range used for nodes to 10.3.0.1, the changes didn’t seem to take; the dashboard was still trying to GET 10.100.0.1 in order to access the API server. At this point I was really frustrated, so I took a break (which means I went to sleep, as it was likely very early in the morning by that point). After waking up, I reconsidered whether it was easier for me to change how the nodes were being assigned IPs, or to regenerate the TLS certificates. Thinking about it for a few minutes, it really felt like I should be AT LEAST comfortable enough with Kubernetes to change how the nodes were being assigned IP addresses, so I plowed on, basically resolving to deepen my understanding of kubernetes to the point where I could confidently change a setting like this.

Despite the brave decision to plow on until I understood, I was thoroughly stumped as to why the dashboard was still looking for 10.100.0.1, so I guessed that the issue was that the dashboard resources (service, pods, etc) were old/stale and hadn’t properly restarted (remember, it’s not sufficient to just restart a pod; you need to restart/re-apply the whole deployment for configuration changes). I also tried restarting the entire server (in effect restarting flanneld, docker, and kubelet along the way). The dashboard was STILL trying to connect to 10.100.0.1. If, after restarting this many things, the pod was still pulling 10.100.0.1 from somewhere, that means it HAD to be recorded somewhere, so I started searching. It turns out the kubernetes API service ITSELF had the 10.100.0.1 IP. Check it out:

$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.100.0.1   <none>        443/TCP   14h

As far as I can explain, the reason this happened is because the API server’s Service was created (14 hours ago) while the OLD configuration (for flanneld and kubelet) was active, so when it came up it got an IP of 10.100.0.1. Services make your pod accessible to other pods, and the dashboard pod couldn’t properly reach the API server because when it connected to it (DNS resolved to 10.100.0.1, which was its IP), it realized that the cert the API was using was for 10.3.0.1. Long story short, I needed to recreate the service that was exposing the API server so that it would get an IP that wasn’t 10.100.0.1. Once I did, voila:

$ kubectl delete svc kubernetes
service "kubernetes" deleted
$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.0.1     <none>        443/TCP   3s

After fixing this, the logs for the dashboard pod went normal, and I could kubectl proxy and visit localhost:8001/ui to access the dashboard!
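
For reference, that last bit is just:

$ kubectl proxy
# then visit http://localhost:8001/ui in a browser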

Remember that DNS add-on that I thought was working but was stuck in the ‘Pending’ state? It got out of pending thanks to the master-node registration problem getting fixed, and at this point (after getting dashboard working), I went back and did some more checking on it.

Revisiting the DNS addon installation: more thorough verification

While enabling the dashboard addon, given the issues I found, I became skeptical that the DNS service was working properly, even though the DNS service was out of the Pending state now. Time to test it!

The simplest way I could think of to test the DNS service:

  1. Start a pod running a simple nginx image
  2. Start a pod of some minimal OS with curl included
  3. curl into the pod created in step 1 from the pod created in step 2, using the DNS name, not the cluster IP

The steps boiled down to:

  1. kubectl run my-nginx --image=nginx --replicas=2 --port=80 (not really sure why I thought I needed 2 replicas, but whatever)
  2. kubectl run test --image=tutum/curl -- sleep 10000 (container with curl)
  3. kubectl get pods (to get their names)
  4. kubectl exec -it <pod name> -- /bin/bash (to get into the curl container, and start trying to access things by DNS)

At first, when I tried this, it didn’t work – you won’t run into these issues now if you use the configuration that’s posted here, because it has all the bits, but it was very important that I add the service and the configmap to the DNS configuration; they were missing when I first wrote the resource configuration. Here’s a little excerpt of a debugging session that was particularly eye-opening (after realizing I needed the service, config map, etc):

Debug: The DNS service is running, but my addresses don’t resolve…

NOTE If you used configuration from these posts, you likely don’t need this section, but it might help others who did things a bit differently, and find themselves with this issue.

To test, I spun up a busybox pod, so I could use nslookup.
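
For completeness, one way to get such a pod (this is my reconstruction of the command, not necessarily the exact one I ran):

$ kubectl run busybox --image=busybox --restart=Never -- sleep 3600
$ kubectl get pods | grep busybox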

$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.3.0.1
Address 1: 10.3.0.1

nslookup: can't resolve 'kubernetes.default'

Well, there’s the proof it’s not working: nslookup can’t resolve the kubernetes.default address, which, according to the DNS documentation, should point to the API server. According to the output (if I’m reading it right) it tried to connect to a DNS server @ 10.3.0.1 and the resolution failed. Wait a second… what pod is @ 10.3.0.1?

$ kubectl get services --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.3.0.1       <none>        443/TCP         1h
default       my-nginx               10.3.49.69     <none>        80/TCP          1h
kube-system   kube-dns               10.3.1.1       <none>        53/UDP,53/TCP   35m
kube-system   kubernetes-dashboard   10.3.246.158   <none>        80/TCP          1h

If you can’t see it quite yet, the problem is that nslookup is asking the API server (NOT the kube-dns pod) for DNS service! It’s kube-dns’s job to do nameserver stuff, not the API service’s. This was hard for me to catch at first, but I let out an audible “d’oh” when I did. I wanted to double check that my understanding was right – if what I was thinking was happening, the nameserver in /etc/resolv.conf would point to 10.3.0.1 (the API server):

$ kubectl exec -ti busybox -- cat /etc/resolv.conf
nameserver 10.3.0.1
search default.svc.cluster.local svc.cluster.local cluster.local your-server.de
options ndots:5

This just about confirmed the issue for me – the nameserver shouldn’t be 10.3.0.1, it should be 10.3.1.1 (where kube-dns actually lives)! Now I needed to track down where the DNS server was getting set to 10.3.0.1, so I checked the manifests and kubelet.service, and found that the cluster_dns value was what was supplying the 10.3.0.1. Here’s what I did to fix the problem:

  1. Update the cluster_dns setting to 10.3.1.1
  2. sudo systemctl daemon-reload
  3. sudo systemctl restart kubelet
  4. Restart the busybox deployment
  5. kubectl exec -ti busybox -- nslookup kubernetes.default

The output of #5 was:

$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.3.1.1
Address 1: 10.3.1.1 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.3.0.1 kubernetes.default.svc.cluster.local

At this point it looks like it’s working, which is fantastic! Now I can repeat the steps with the curl and nginx containers and make sure the nginx container is accessible by DNS name, and lo and behold, my-nginx:80 (the nginx container) is accessible from inside the curl-only container.
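
For anyone following along, that DNS-based check from the curl container boils down to something like this (the pod name is whatever kubectl get pods shows for the curl pod; my-nginx is the service created above):

$ kubectl exec -it <curl-pod-name> -- curl -sI http://my-nginx
$ kubectl exec -it <curl-pod-name> -- curl -sI http://my-nginx.default.svc.cluster.local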

Using ‘kubectl port-forward’ for just a little more verification

For just a little more verification to make sure things were working right, I used kubectl port-forward from the simple nginx guide to access the my-nginx deployment I made earlier.

It was as simple as:

  1. kubectl port-forward my-nginx-4293833666-nf52h 4000:80
  2. Visit localhost:4000 in my browser

Worked a charm!

Test out exposing NGINX to the whole wide internet

At this point, I’m still very much using the simple nginx guide that I found on the Kubernetes github repository. For me the expose command looked like this:

$ kubectl expose deployment nginx --port=80 --target-port=8000

After running that command, it seemed to work, but using the browser I still couldn’t access that nginx deployment from the world wide web (visiting the IP of the machine in a web browser). nmap helped me verify that the problem was the port not being open at all on the server. My next thought as to what might be wrong was the firewall. VERY VERY VERY unfortunately, CoreOS only comes with iptables, not ufw, so I had to go through the incredibly painful process of setting up iptables rules (some articles on digital ocean helped a ton):

$ sudo iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
$ sudo iptables -A OUTPUT -p tcp --sport 80 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Running those commands on the server opened up my port 80 for TCP traffic, allowing web requests through to hit the kubernetes cluster, and showing me the NGINX welcome page, great success!

UPDATE Mokhtar Ibrahim from likegeeks.com let me know about a long, in-depth article he wrote about properly understanding and using iptables, complete with lots of examples. If you want to understand more about iptables and the DigitalOcean article wasn’t enough, check it out!

What’s next?

So at this point now, I’ve got:

  • CoreOS running on new dedicated hardware
  • Kubernetes running with remote management through kubectl
  • DNS addon set up on the kubernetes cluster to allow containers to refer to each other by DNS name
  • Dashboard addon set up for the kubernetes cluster, enabling monitoring and management of the cluster from a sit-back-and-click GUI

With this post, the “Fresh dedicated server to single node kubernetes cluster on CoreOS” series is done. We’ve come a long way, but the Kubernetes-related posts are certainly not done – this series laid the groundwork for lots of real work to come:

  1. I have to actually port the app containers that are running on my other infrastructure into Deployment and Service resources (and whatever else) that Kubernetes can deal with.
  2. Figure out how infrastructure pieces like databases are supposed to run on Kubernetes. From my memory of the documentation, this will likely entail:
    • Figuring out if I need Persistent Volume (+/- Claims) to access to a drive on a particular machine consistently (just about every locally-run database will need files at least)
    • Figuring out if I need StatefulSets (formerly known as PetSets)
  3. Start porting utilities (that I didn’t write) like Piwik that I’m running using Docker Compose to Kubernetes resource configs
  4. Figuring out how to make it all repeatable and not go down when the machine restarts
    • Will I need to port systemd unit/service files that I used in older infrastructure to pods? Since everything is being run by kubernetes, how do I make sure those things are running when kubelet starts up?
    • Is there ansible support for kubernetes? Do I even need ansible if I have kubernetes?
      • I actually answered this question fairly quickly, I do want ansible around if I want to do some deep/dirty stuff on the machine that Kubernetes can’t do, like dropping manifests and setting kubernetes itself up.
      • It looks like ansible is committed to managing kubernetes, and there’s the kubernetes module available.

The coming posts (that cover how I dealt with all these issues) won’t be written as part of this series per se, as I think people can benefit from them even if they didn’t go through the same route setting up Kubernetes (for example, people who used Tectonic to set up their cluster). Here’s what’s coming up (basically, the notes I have written, but not yet made into blog posts):

  • Serving HTTP applications on a single node bare metal kubernetes cluster
  • Serving a database on a single node bare metal kubernetes cluster
  • Serving HTTPS apps on a single node bare metal kubernetes cluster
  • Setting up SSL certs on Kubernetes
  • Serving Email on a single node bare metal kubernetes cluster
  • Setting up Piwik on Kubernetes, and migrating data from the old instance

Reflections on the steps so far

Overall, I’m still very pleased with Kubernetes – I’ve never found myself overly frustrated with the software; most of the time I’m just frustrated with my own lack of understanding. Kubernetes inspires trust – it works as it says it will, and often goes out of its way to help you know what you did wrong (if that’s the case). While the ramp up is indeed steep (there’s a lot to read, and a lot of concepts to know/understand), it’s a good fit for someone like me, who was looking for a more cohesive way to tie together all the ops-related stuff I was doing.

I would have preferred if DNS was bundled and started automatically as a part of kubernetes (I’m pretty sure I just need to put it in /etc/kubernetes/manifests and it will be), as it seems to be a pretty crucial feature for me; however, I would totally understand if the decision was made to leave it out because it would too eagerly prescribe a container-discovery solution (when people might write others) and add non-crucial baggage to Kubernetes.

I’m excited to put more experience with Kubernetes concepts under my belt, and transfer all my infrastructure over (at this point, I’m running my old infrastructure, and the new coreos server with kubernetes). As I steadily get better at writing resource configurations, and inspecting my way around the cluster, my brain is buzzing with PaaS like ideas that Kubernetes makes so very possible – programmatic control of my cluster is just a HTTP request away (with the right credentials), and new services/projects are only a YAML file and kubectl apply -f away from being available, replicated, and running reliably.