Fresh Dedicated Server to Single Node Kubernetes cluster on CoreOS, Part 1

This is the first of a series of blog posts centered around my explorations and experiments with using Kubernetes to power my own small slice of infrastructure (1-3 servers, mix of VPS and dedicated hardware).

This post is a bit of an introductory piece detailing some of my motivations and internal dialogue on switching to Kubernetes and upgrading my infrastructure at all. Unfortunately it will be light on “how-to” and will mostly cover “why-I-chose-to”.

If you’re wondering what came before, check out my previous post on my ansible + systemd + docker setup. While these posts are being published fairly close together, note that I typed them up after the fact – the raw notes were taken at the time, and I’ve only recently found time to go through and refine them into blog posts. It’s actually been quite a few months since I converted to the ansible setup; the posts are just super late in finally hitting the internet.

Why change my infrastructure at all?

The impetus for starting down the path to Kubernetes was discovering the ease and power of dedicated servers. Up until now, I had only been using managed instances, VPSes, and EC2 machines to run my personal projects, and thought that dedicated hardware required trips to a data center to install and wasn’t easy to manage. As I ran more and more projects on underpowered VPSes, I inevitably wanted more power/control, eventually switching to using just KVM VPSes for everything. Recently I was introduced to dedicated servers through a comment on Hacker News (one of the only places I go anymore for tech/programming-related news/articles), and once I realized how easy it was to get a powerful machine for a little bit more money, I was instantly hooked. For me, the biggest reason for moving to Kubernetes was a fresh start on a new, beefier server.

With this move to a beefier server, I realized that I would of course have to move a bunch of applications over, and I wanted to see if there was a good way to increase the automatability of running apps on the one big machine. Basically, I wanted a system I could just push a container to and have something running in prod. There are tools that do something close, like Dokku and Flynn, but I didn’t even want to push code over – just containers. Containers greatly simplified my deployment flow, and I think they’re the future (and present, where applicable) of deployment – long gone are the days when I have to manage some single process running on a machine, making sure a bunch of random things are installed on the file system in a very specific place for everything to run properly.

My first foray into dedicated servers happened through Hetzner’s online server auction. It was amazing to see large, very capable servers with great specs available at good prices – basically what I was already paying for multiple VPSes. I picked a server with specs that I liked, for around $40 a month, and after roughly 5 hours (it might have been even quicker), it was installed and ready for me to SSH into (with SSH-key-based login already pre-configured, which was also great).

Choosing CoreOS

After hearing a bit about CoreOS, and knowing how much using docker had simplified my life, I thought this was also a good time to get into running a container-centric OS like CoreOS. While I don’t know the official definition of “Container Linux”, I generally explain it as linux distribution(s) that are very focused on enabling a container-centric workflow – almost shifting the burden of running long-running programs to whatever container daemon is being used (docker/rkt/etc). To me, this boils down to CoreOS basically being a linux distribution that has a LOT of things stripped from it (a package manager, for example), with the expectation that administrators don’t need to do much with the server other than run containers. As I’ve alluded to before, I’m already on the container hype train and enjoying the simplicity of my deployments with them, so this move is welcome to me – on my previous machines I do little more than make sure that nginx and a bunch of docker containers are running.

Another interesting choice is RancherOS, which takes the idea even further in that many system-level services are themselves dockerized! That was a little extreme for me, so I stuck with CoreOS for my first foray, at least.

Installing CoreOS

Step 0: Find the right guide

I started by reading the majority of the Container Linux Documentation. It was a little confusing at times because much of the documentation is written with cloud providers in mind, but once I got used to calling a server I SSH into “bare metal”, I could find the documentation that was relevant to me and get an idea of what I was getting into.

I spent a bunch of time reading the guide on installing container linux to disk, trying to understand the process and the moving bits/pieces in my head before getting on the server and trying things out.

Step 1: Reading more documentation and guides

The server came with what they (everyone?) call a rescue OS – a Debian-based OS installed after the machine was wiped once the previous user was done with it (the machines are recycled). Ironically, I was more comfortable in the rescue OS than in CoreOS after it was fully installed, since the rescue OS has utilities like apt-get, wget, etc. that I could comfortably fall back on.

Once I was able to access the admin interface for my server and SSH in, the first thing was to figure out just how to get started. I found a gist on GitHub that looked extremely useful.

Unfortunately, cloud-config is the OLD/deprecated way to provision CoreOS/Container Linux – the new standard is Ignition – so the guide that I found in that gist was at least partially outdated. This meant going back and reading the documentation some more, trying to figure out the differences between cloud-config and Ignition and how to use the latter.

The documentation is a bit spread out and sometimes a bit hard to find (though there’s a lot there, which I’m definitely grateful for), so here are some links that helped me put things together:

  • Ignition config examples (I kind of wanted to see the second-to-end product to get a feel for the ergonomics of Ignition config)
  • Config transpiler overview (transpiling kind of brought up a little PTSD/fear of complexity but it’s actually not bad, in this case)
  • The ignition getting started docs (one of the big questions I had here was how the ignition configuration file even got on the machine that was starting up – I panicked a little bit thinking that it required another server to FTP from or whatever, but there are lots of reasonable ways to get the file read)
  • What is ignition
  • Installing to disk documentation (this is where I realized that everyone calls what I’m doing a “bare metal” installation). When I hear “bare metal” I think of a machine that doesn’t even have an OS, like programming something that will run on a hypervisor directly. The important bit was:

Bare Metal - Use the coreos.config.url kernel parameter to provide a URL to the configuration. The URL can use the http:// or tftp:// schemes to specify a remote config or the oem:// scheme to specify a local config, rooted in /usr/share/oem.
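
In practice that just means appending something like the following to the kernel command line when the machine boots (in a PXE or GRUB entry, for example) – the URL here is purely illustrative:

coreos.config.url=https://example.com/ignition-config.json

In my case none of this ended up mattering, since (as you’ll see below) coreos-install takes the Ignition config directly via a flag.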

With this stuff read, I was pretty comfortable that I knew at least some of the moving pieces and what I was getting myself into. Next, I tried to set up RAID1, which, though easy to set up, is not actually well supported by CoreOS just yet. I’ll spare you the details, but suffice it to say that after finding a great guide to setting up RAID1 and setting it up, I basically had to tear it down immediately afterwards. No redundancy for me, I guess :(

Step 2: Setting up CoreOS

The install CoreOS to disk guide was a real life-saver here, but the steps are actually pretty simple:

  1. Download the coreos-install script
  2. Make it executable (chmod +x coreos-install)
  3. Download ct (the config transpiler) from its releases page – it turns your human-readable, YAML-based Container Linux Config (that’s a mouthful) into the JSON Ignition config which the install script will actually read/use.
  4. Verify the binary…? It’s kind of a bad idea to just download binaries off the internet and run them
  5. Rename the binary and stick it in /usr/bin for now or somewhere else on your PATH
  6. Start creating the YAML Container Linux Config (AKA Ignition config?) that you’ll be feeding to ct – there’s a minimal sketch of one just after this list
  7. Convert the config to JSON that can actually be used during container linux setup (either ct -in-file container-linux-config.yml -out-file coreos-ignition-config.json or, if you like your UNIX-y feels, cat path-to-your-container-linux-config.yaml | ct > coreos-ignition-config.json)
  8. Run the CoreOS installer (this is the bit where I found out that RAID1 support is kind of not a thing yet – trying to run the installer on a RAID device won’t work; there’s even a hardcoded check)
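
To make steps 6 through 8 concrete, here’s a rough sketch of the whole flow, using the file names from step 7 and the same disk/channel as my install. The config is roughly the minimal one I ended up using (just the core user with an SSH key) – the SSH public key is a placeholder, so substitute your own.

# Step 6: write a minimal Container Linux Config – one user ("core") with an SSH key.
# The key below is a placeholder, not a real one.
cat > container-linux-config.yml <<'EOF'
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-rsa AAAA... you@your-machine"
EOF

# Step 7: transpile the YAML into the JSON Ignition config that coreos-install will read
ct -in-file container-linux-config.yml -out-file coreos-ignition-config.json

# Step 8: install Container Linux (stable channel) to the target disk,
# baking in the Ignition config
coreos-install -d /dev/sda -C stable -i ./coreos-ignition-config.json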

For me, the output looked like this:

root@rescue ~ # coreos-install -d /dev/sda -C stable -i ./coreos-ignition-config.json
Current version of CoreOS Container Linux stable is 1409.7.0
Downloading the signature for https://stable.release.core-os.net/amd64-usr/1409.7.0/coreos_production_image.bin.bz2...
2017-08-06 14:37:00 URL:https://stable.release.core-os.net/amd64-usr/1409.7.0/coreos_production_image.bin.bz2.sig [543/543] -> "/tmp/coreos-install.mKyJwUYtgF/coreos_production_image.bin.bz2.sig" [1]
Downloading, writing and verifying coreos_production_image.bin.bz2...
2017-08-06 14:37:35 URL:https://stable.release.core-os.net/amd64-usr/1409.7.0/coreos_production_image.bin.bz2 [288011317/288011317] -> "-" [1]
gpg: Signature made Wed 19 Jul 2017 02:12:04 AM CEST using RSA key ID 1CB5FA26
gpg: key 93D2DCB4 marked as ultimately trusted
gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "CoreOS Buildbot (Offical Builds) <buildbot@coreos.com>"
Installing Ignition config ./coreos-ignition-config.json...
Success! CoreOS Container Linux stable 1409.7.0 is installed on /dev/sda

The process was so easy that I didn’t even have a chance to make a fairly complicated/time-saving Ignition config – I wrote a minimal one with just one user (core) and an SSH key, and it just worked. All in all, a very pain-free process. After this I rebooted and the prompt looked like:

Container Linux by CoreOS stable (1409.7.0)
core@localhost ~ $

Nice and easy! If things keep going like this I’ll be very happy. Unfortunately, I then went through a few manual steps to set up some other things that could/should have been done in the Ignition config, but since I’m not running in a cloud environment and didn’t want to wipe my server back to the rescue state just to test a better config, I endured the dirty feeling of doing manual, hard-to-automatically-reproduce stuff on the server.

Now that we’ve got a pretty minimally configured but working CoreOS system up and running, the next thing is to install Kubernetes, so stay tuned – there’s A LOT more to come (where there were roughly 3 steps to setting up CoreOS, I went through 9 steps and a lot of wandering in the dark to set up Kubernetes).