tl;dr - I upgraded my small single-node cluster (6C/12T, 16GB, Ubuntu 18.10 Hetzner dedicated server) from Kubernetes 1.13 to 1.15. See the TLDR summary section for the list of steps (basically, use kubeadm).
I run a tiny single-node kubernetes cluster which hosts this site, client demos, experiments, and many of my own projects – I’ve written about it lots over the years, and today I thought I’d cover one of the mostly mundane (for my setup) parts of running Kubernetes – upgrading to a newer version, 1.15. kubeadm makes the process really simple, but I’m going to take the 1.13 -> 1.14 -> 1.15 route to get there.
Obviously at this point, you should have a pretty good idea what Kubernetes is and how it runs – the critical pieces of infrastructure that run on the master node and/or worker nodes. Some good resources:
- the kubelet official documentation
- the kubeadm official documentation
This would be the right place to start taking some backups and an inventory of what you know about the current setup. I manage my machines with ansible (others are not running k8s), so this is a good point to check my scripts and make sure they’re still up to date and that any previous changes I was making have settled. As far as kubernetes goes, we can find out the version of the client & server with kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
A few things are old there – the binaries report go 1.11.2 and the cluster is on kubernetes 1.13, while go 1.12 and kubernetes 1.15 are the latest – for now I’ll focus on moving only kubernetes.
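Before touching anything it’s also worth a quick inventory of node and workload state – a rough sketch of the kind of checks I mean, using only stock kubectl/systemctl commands:
$ kubectl get nodes -o wide                   # node status, kubelet version, OS/kernel
$ kubectl get pods --all-namespaces -o wide   # what’s running and where
$ kubectl get pv,pvc --all-namespaces         # persistent volumes that need to survive the upgrade
$ systemctl status kubelet --no-pager         # confirm the kubelet service is healthy before we start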
As far as making a backup goes, there are a few excellent resources out there on how to back up a k8s cluster, so I’ll leave that as an exercise for the reader. The important bits are:
- /etc/kubernetes (if you used manifests in your install)
- etcd state
A cluster basically is the sum total of the resources running on it and the ephemeral state stored in etcd, assuming the base system is configured similarly (for things like mounted hard drives, etc). Capturing the creation of these things/the cluster configuration in an -infra repository is how I do this without many worries. I can go from a completely new server to all services installed with one command.
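For reference, here’s a minimal sketch of what taking an etcd snapshot can look like on a stock kubeadm layout – this assumes etcd runs as a static pod named etcd-<node-name> and that the PKI lives in the default /etc/kubernetes/pki/etcd location; adjust to your setup:
# take a v3 snapshot from inside the etcd static pod
$ kubectl -n kube-system exec etcd-ubuntu-1810-cosmic-64-minimal -- sh -c \
    "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt \
     --key=/etc/kubernetes/pki/etcd/server.key \
     snapshot save /var/lib/etcd/snapshot.db"
# bundle up the manifests, certs and the snapshot from the host
$ sudo tar -czf k8s-backup-$(date +%F).tar.gz /etc/kubernetes /var/lib/etcd/snapshot.db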
Assuming you have a backup, let’s get a feel for where kubeadm is:
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Updating kubeadm via apt (don’t do this)
WARNING: DO NOT apt-get install kubeadm like I’m about to do here – apt will just replace your /usr/bin/kubelet and re-install it without any regard for your cluster status/health. It’s super fucking dangerous, and I don’t know why pulling kubelet along is the default behaviour when all you want is a newer kubeadm.
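If you do keep the apt packages around, one way to reduce the blast radius is to put them on hold so a routine apt upgrade can’t silently swap binaries out from under a running cluster – a small sketch using standard apt-mark behaviour:
# prevent apt from upgrading (and restarting) these without an explicit unhold
$ sudo apt-mark hold kubelet kubeadm kubectl
# later, when you actually intend to upgrade:
$ sudo apt-mark unhold kubelet kubeadm kubectl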
Let’s update to a newer version of kubeadm before we get started – we want to get any goodies that might have been fixed in later versions:
$ sudo apt-get install kubeadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
conntrack kubelet kubernetes-cni
The following NEW packages will be installed:
conntrack
The following packages will be upgraded:
kubeadm kubelet kubernetes-cni
3 upgraded, 1 newly installed, 0 to remove and 83 not upgraded.
Need to get 35.0 MB of archives.
After this operation, 14.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
So a few more things came in than we expected there – conntrack was newly installed, and kubelet and kubernetes-cni were upgraded alongside kubeadm. Afterwards the version is:
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
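In hindsight, had I wanted apt to stay within the 1.13 series instead of jumping straight to 1.15, I could have asked it for a pinned version explicitly – a sketch, assuming the kubernetes apt repo’s usual <version>-00 package naming:
# list the versions the apt repo knows about
$ apt-cache madison kubeadm
# install a specific version of kubeadm (and pin kubelet to match)
$ sudo apt-get install -y kubeadm=1.13.8-00 kubelet=1.13.8-00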
kubeadm upgrade
kubeadm happens to have an extremely useful upgrade mechanism – let’s see how it works:
$ kubeadm upgrade --help
Upgrade your cluster smoothly to a newer version with this command
Usage:
kubeadm upgrade [flags]
kubeadm upgrade [command]
Available Commands:
apply Upgrade your Kubernetes cluster to the specified version
diff Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
node Upgrade commands for a node in the cluster
plan Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter
Flags:
-h, --help help for upgrade
Global Flags:
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
-v, --v Level number for the log level verbosity
Use "kubeadm upgrade [command] --help" for more information about a command.
A good idea is probably to run the plan subcommand and see what our options are:
$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/versions] Latest stable version: v1.15.0
[upgrade/versions] Latest version in the v1.13 series: v1.13.8
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.15.0 v1.13.8
Upgrade to the latest version in the v1.13 series:
COMPONENT CURRENT AVAILABLE
API Server v1.13.0 v1.13.8
Controller Manager v1.13.0 v1.13.8
Scheduler v1.13.0 v1.13.8
Kube Proxy v1.13.0 v1.13.8
CoreDNS 1.2.6 1.3.1
Etcd 3.2.24 3.2.24
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.13.8
_____________________________________________________________________
Upgrade to the latest stable version:
COMPONENT CURRENT AVAILABLE
API Server v1.13.0 v1.15.0
Controller Manager v1.13.0 v1.15.0
Scheduler v1.13.0 v1.15.0
Kube Proxy v1.13.0 v1.15.0
CoreDNS 1.2.6 1.3.1
Etcd 3.2.24 3.3.10
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.15.0
_____________________________________________________________________
Such nice useful output! I’ve got at least two options – upgrade to v1.13.8 or v1.15.0. If I upgrade to v1.13.8, it looks like I’m going to have to upgrade the kubelet manually. The apt-get install that triggered the update of kubelet earlier turned the local kubelet binary into v1.15.0 (right now the old v1.13.0 version is still running, assuming the systemd service has not restarted). We can confirm this with kubelet --version:
$ which kubelet
/usr/bin/kubelet
$ kubelet --version
Kubernetes v1.15.0
Since I declared earlier that I was going to be a bit more careful, let’s do this – we know that /usr/bin/kubelet is currently the newer version, so let’s uninstall the kubelet package (which introduced the v1.15.0 binary) and get the v1.13.0 binary back:
$ sudo cp /usr/bin/kubelet /usr/bin/kubelet-1.15.0.bak
$ sudo apt-get remove kubelet
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
conntrack cri-tools ebtables kubernetes-cni socat
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
kubeadm kubelet
0 upgraded, 0 newly installed, 2 to remove and 83 not upgraded.
After this operation, 160 MB disk space will be freed.
Do you want to continue? [Y/n] n
Abort.
Yikes! Well that’s not what we want at all – I don’t want to remove kubeadm! Rather than investigate whether this pairing of the packages is absolutely necessary (I could just try to force apt to remove only the one package), since I know that kubelet is literally just a binary (thanks Go!) that will run, I’m just going to pull the v1.13.0 kubelet from the server binaries linked in the release docs and install it to /usr/bin myself:
$ echo "a8e3d457e5bcc1c09eeb66111e8dd049d6ba048c3c0fa90a61814291afdcde93f1c6dbb07beef090d1d8a9958402ff843e9af23ae9f069c17c0a7c6ce4034686 kubernetes-server-linux-amd64.tar.gz" > kubernetes-server-linux-amd64.tar.gz.sha512
$ curl -LO https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz # download the server binaries
$ sha512sum -c kubernetes-server-linux-amd64.tar.gz.sha512 # check the SHA sum
kubernetes-server-linux-amd64.tar.gz: OK
OK, now we’ve got some semblance of assurance that the download we got was correct, let’s extract it:
$ tar -xvf kubernetes-server-linux-amd64.tar.gz
kubernetes/
kubernetes/server/
kubernetes/server/bin/
kubernetes/server/bin/kube-scheduler.docker_tag
kubernetes/server/bin/kube-controller-manager.docker_tag
kubernetes/server/bin/kube-apiserver.docker_tag
kubernetes/server/bin/kube-scheduler.tar
kubernetes/server/bin/kubelet
kubernetes/server/bin/hyperkube
kubernetes/server/bin/mounter
kubernetes/server/bin/kube-apiserver.tar
kubernetes/server/bin/kube-scheduler
kubernetes/server/bin/cloud-controller-manager.docker_tag
kubernetes/server/bin/kube-controller-manager.tar
kubernetes/server/bin/cloud-controller-manager.tar
kubernetes/server/bin/kubectl
kubernetes/server/bin/kube-apiserver
kubernetes/server/bin/kube-proxy.docker_tag
kubernetes/server/bin/apiextensions-apiserver
kubernetes/server/bin/kube-proxy.tar
kubernetes/server/bin/kube-proxy
kubernetes/server/bin/kubeadm
kubernetes/server/bin/cloud-controller-manager
kubernetes/server/bin/kube-controller-manager
kubernetes/LICENSES
kubernetes/kubernetes-src.tar.gz
kubernetes/addons/
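As an aside, if you only care about one or two of these binaries, tar can extract specific members instead of the whole tree – a small sketch:
# extract just the kubelet and kubeadm binaries from the archive
$ tar -xzf kubernetes-server-linux-amd64.tar.gz kubernetes/server/bin/kubelet kubernetes/server/bin/kubeadm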
And we’ll copy the kubelet binary right into /usr/bin:
$ sudo cp kubernetes/server/bin/kubelet /usr/bin/kubelet
THIS is the moment I realized how big of a mistake using apt was – the earlier apt installation had near-instantaneously replaced kubelet, re-installed the binary, and restarted the systemd service. The earlier step has been updated to reflect this error, but while it was actually happening I needed to get back to the original state (v1.13.0 with v1.13.0 binaries all over), which meant swapping /usr/bin/kubelet back to the v1.13.0 version. After doing this, I get the right version coming back from kubectl get nodes:
$ k get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-1810-cosmic-64-minimal Ready master 217d v1.13.0
kubeadm upgrade
Now that we’ve fixed the small snafu with the over-eager updating from apt, let’s do a kubeadm-guided upgrade to get to v1.13.8:
$ kubeadm upgrade plan v1.13.8
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.15.0
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.13.0 v1.13.8
Upgrade to the latest version in the v1.13 series:
COMPONENT CURRENT AVAILABLE
API Server v1.13.0 v1.13.8
Controller Manager v1.13.0 v1.13.8
Scheduler v1.13.0 v1.13.8
Kube Proxy v1.13.0 v1.13.8
CoreDNS 1.2.6 1.3.1
Etcd 3.2.24 3.2.24
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.13.8
_____________________________________________________________________
OK, looks like we’re going to need to get the kubelet for v1.13.8 as well! Let’s download that so we can upgrade the control plane. I had to do a little more digging, but eventually found the kubernetes announce post for v1.13.8, which had links to the binaries.
$ echo "02f698baeb6071a1b900c88537eef27ed7fda55a59db09148def066ddccec34b74f21831d35e9f1ef8205f6f5f7c51c9337086b6374e9b26b592a0eeeb59cc15 kubernetes-server-linux-amd64.tar.gz" > kubernetes-server-linux-amd64.tar.gz.sha512
$ curl -LO https://dl.k8s.io/v1.13.8/kubernetes-server-linux-amd64.tar.gz
$ sha512sum -c kubernetes-server-linux-amd64.tar.gz.sha512
kubernetes-server-linux-amd64.tar.gz: OK
$ tar -xvf kubernetes-server-linux-amd64.tar.gz
With that, we’re ready to do the manual kubelet upgrade once the control plane is upgraded, so let’s run kubeadm upgrade apply v1.13.8!
$ kubeadm upgrade apply v1.13.8
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.13.8"
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/version] FATAL: the --version argument is invalid due to these errors:
- Kubeadm version v1.15.0 can only be used to upgrade to Kubernetes version 1.15
Well, it looks like I made a grave mistake – kubeadm version 1.15 can only be used to upgrade to kubernetes version 1.15! Looks like I’ll need to replace that binary as well (installing with apt really was a huge mistake). An apt-get remove and a cp later, I’ve got the v1.13.8 version of kubeadm that I just unpacked available at /usr/bin/kubeadm. A kubeadm upgrade plan run returns the following:
$ kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.13.8
I0715 07:53:28.760032 7405 version.go:237] remote version is much newer: v1.15.0; falling back to: stable-1.13
[upgrade/versions] Latest stable version: v1.13.8
[upgrade/versions] Latest version in the v1.13 series: v1.13.8
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.13.0 v1.13.8
Upgrade to the latest version in the v1.13 series:
COMPONENT CURRENT AVAILABLE
API Server v1.13.0 v1.13.8
Controller Manager v1.13.0 v1.13.8
Scheduler v1.13.0 v1.13.8
Kube Proxy v1.13.0 v1.13.8
CoreDNS 1.2.6 1.2.6
Etcd 3.2.24 3.2.24
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.13.8
_____________________________________________________________________
Finally, let’s run it:
$ kubeadm upgrade apply v1.13.8
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.13.8"
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.13.8
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.13.8"...
Static pod: kube-apiserver-ubuntu-1810-cosmic-64-minimal hash: 8bc3534f1824c7cd4f92a16dcac4f046
Static pod: kube-controller-manager-ubuntu-1810-cosmic-64-minimal hash: 21b83e834c36f219e2e985791255dac5
Static pod: kube-scheduler-ubuntu-1810-cosmic-64-minimal hash: 69aa2b9af9c518ac6265f1e8dce289a0
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests306892373"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-15-07-57-06/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ubuntu-1810-cosmic-64-minimal hash: 8bc3534f1824c7cd4f92a16dcac4f046
Static pod: kube-apiserver-ubuntu-1810-cosmic-64-minimal hash: 8bc3534f1824c7cd4f92a16dcac4f046
Static pod: kube-apiserver-ubuntu-1810-cosmic-64-minimal hash: 8bc3534f1824c7cd4f92a16dcac4f046
Static pod: kube-apiserver-ubuntu-1810-cosmic-64-minimal hash: c059737981365c7d00a7a60cbaa13e4e
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-15-07-57-06/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ubuntu-1810-cosmic-64-minimal hash: 21b83e834c36f219e2e985791255dac5
Static pod: kube-controller-manager-ubuntu-1810-cosmic-64-minimal hash: 95cf5c26557ad5d37e9f12717d15760c
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-15-07-57-06/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ubuntu-1810-cosmic-64-minimal hash: 69aa2b9af9c518ac6265f1e8dce289a0
Static pod: kube-scheduler-ubuntu-1810-cosmic-64-minimal hash: 17f0dc6eab7ce5b8c9735be94ba38fff
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ubuntu-1810-cosmic-64-minimal" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu-1810-cosmic-64-minimal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 176.9.30.135]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.8". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
The output looks pretty benign! Now let’s update our kubelet ASAP (NOTE: this causes downtime on the order of seconds):
$ sudo systemctl stop kubelet && sudo cp ./kubernetes/server/bin/kubelet /usr/bin/kubelet && sudo systemctl start kubelet
A quick check of the systemd status of kubelet looks good, and the kubectl get nodes output also looks good:
$ k get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-1810-cosmic-64-minimal Ready master 217d v1.13.8
This all means nothing, however, if the actual apps I host are down, so I checked some of the monitoring I have in place – for example, I run an internal instance of Statping, so checking that as well as visiting the properties I expect to be up (this blog, techjobs.tokyo, etc) was good enough for me.
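For a less eyeball-driven check, the control plane itself can tell you whether things have settled – a quick sketch with stock kubectl commands (componentstatuses still existed on these versions):
$ kubectl get pods -n kube-system    # control plane & addon pods should all be Running
$ kubectl get componentstatuses      # scheduler / controller-manager / etcd health
$ kubectl get nodes                  # the node should be Ready and report the new kubelet version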
Upgrading from v1.13.8 to v1.14.0 was pretty straightforward, and the subsequent upgrade from v1.14.0 to v1.15.0 went much the same way.
While I was there I also updated containerd to v1.2.7 (from v1.2.1). All in all, the update process is pretty forgiving, and at this point the ecosystem is pretty mature. After all this updating I have what I was looking for:
$ k get node
NAME STATUS ROLES AGE VERSION
ubuntu-1810-cosmic-64-minimal Ready master 217d v1.15.0
So essentially the steps for a manual upgrade were as follows:
- Make a backup (etcd, other relevant state)
- Run kubeadm upgrade plan to see some preliminary options if you don’t know what version to upgrade to (it should show you a minor and major upgrade if either are possible)
- Download the server binaries for the target version (ex. https://dl.k8s.io/v1.13.8/kubernetes-server-linux-amd64.tar.gz) & check the SHA sum
- Replace the apt-installed kubeadm with the newer version from the server binaries (for the version you want to upgrade to)
- Run kubeadm upgrade <version>
- Swap /usr/bin/kubelet for the matching version and restart the kubelet service
These steps can be repeated as necessary to get to the final version you want to be at.
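To make that repetition less tedious, the loop can be sketched as a small shell script – this is not what I ran verbatim, the checksum placeholder has to be filled in from the official release announcement for each version, and it assumes the single-node layout described above:
#!/usr/bin/env bash
set -euo pipefail

VERSION="v1.14.0"                  # target version for this hop
SHA512="<from the release notes>"  # fill in per-version; never skip the check

curl -LO "https://dl.k8s.io/${VERSION}/kubernetes-server-linux-amd64.tar.gz"
echo "${SHA512} kubernetes-server-linux-amd64.tar.gz" | sha512sum -c -
tar -xzf kubernetes-server-linux-amd64.tar.gz

# swap in the matching kubeadm and upgrade the control plane
sudo cp kubernetes/server/bin/kubeadm /usr/bin/kubeadm
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply "${VERSION}" -y

# swap the kubelet binary and restart the service (a few seconds of downtime)
sudo systemctl stop kubelet
sudo cp kubernetes/server/bin/kubelet /usr/bin/kubelet
sudo systemctl start kubelet

kubectl get nodes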
Well, everything I’ve done here is actually pretty terrible from a reproducibility standpoint – manual work that should be automated where possible. I have at least two options I can think of:
- Use ansible to create a parametrized role that will upgrade between two different k8s versions
- Bring up a node running the newer version, cordon and drain the old node, and shift the workloads over
The second approach is the safest and right(tm) solution – kubectl cordon exists for this purpose, and this is the way GKE reports you should do it. As I’m not operating at super-critical levels of availability, I didn’t feel the need to do either of these things – I plan on growing the cluster soon, but since I have other things to do today I’m going to take the ~5 seconds of downtime imposed by the restart of the kubelet service.
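For completeness, the cordon/drain flow on a multi-node cluster looks roughly like this – a sketch with a hypothetical <node-name>, using only stock kubectl subcommands:
# stop new pods from being scheduled onto the node
$ kubectl cordon <node-name>
# evict the existing workloads (DaemonSet pods can’t be evicted, so ignore them)
$ kubectl drain <node-name> --ignore-daemonsets --delete-local-data
# ...upgrade the node’s kubelet, reboot, etc...
# let workloads schedule back onto it
$ kubectl uncordon <node-name>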
Well anyway, I hope this was at least an interesting read for those on the fence about kubernetes for local small-scale use – the tooling is pretty fantastic already, and once you’ve paid the up-front cost of learning the pieces that make kubernetes work (kubelet, etcd, the related manifests), it’s pretty easy to debug in your head – though I can’t claim it always goes this smoothly. Even if you make mistakes (like swapping out the kubelet for the wrong version because you thought apt-get install would be a good idea), the setup can be pretty forgiving once it’s working.