
Ingress Controller considerations in early 2021

Kubernetes logo + ( Traefik logo || Envoy logo )

DISCLOSURE (03/16/2021)

Sitting around after releasing this post I realized that I had accepted becoming a "Traefik Ambassador" due to my previous post about Traefik (the one about adding HTTPS settings easily), and had submitted the post to their listing when they asked. Such treatment can certainly be a source of bias, so I want to make it clear that this is not a sponsored post in any way (I have not been compensated).

In the future I'll remove myself from the Ambassador program to avoid this conflict of interest. Traefik being portrayed in a positive light by this post is due to the hard work of the Traefik team, not any pay-to-play/BusinessWire-y agreement.

tl;dr - My very late discovery of some changes and new features in Envoy made me consider switching to an Envoy-backed Ingress Controller, but I ended up sticking with Traefik, since it's got a great feature set, the most important part to me being UDP support.

While I don’t watch for absolutely every new development in the Kubernetes ecosystem (could any one human even do this without going insane, or being insane from the start?), I do keep my ear somewhat close to the ground on changes to Ingress-related components. Up until now I’d used the Kubernetes-team-maintained NGINX controller, and eventually I switched to Traefik and have been pretty happy with it since. I never got on the Istio train since it was overly complicated; I much prefer Linkerd 2 for intra-cluster smarts, if I really need a mesh to begin with. In the meantime, I’ve fully embraced Traefik’s CRD-driven approach (for example, see the post where I was able to secure HTTPS very easily), but it never hurts to keep abreast of changes in the landscape.

I am pretty late to it, but I recently came across the Envoy documentation for the gRPC bridge and was pretty impressed with the prospect of being able to put that functionality at the mesh level. Envoy has some pretty compelling features, so I did some more digging to see if it was worth considering switching from Traefik to an Envoy-backed Ingress Controller.

Before I get into the Envoy-backed ingress controllers I looked at, it might make sense to drop some context on the path I took to get to where I am now.

What I used before and why

  • NGINX: well supported, it’s super reliable, operation is easy to understand – NGINX config files are generated and loaded, you can inspect it “in-vivo” yourself if you kubectl exec into the pod(s)
  • Traefik: newcomer with some awesome features (annotation- then CRD-driven) that integrate easily, added UDP support fairly quickly, has a nice (though somewhat hobbled) plugin ecosystem and a nice built-in management UI

From what I can remember, I actually switched to Traefik because I was feeling kind of despondent at the idea of having to modify a ConfigMap any time I wanted to add a new UDP service. It felt like it should be possible to add one dynamically; the ingress-nginx instructions worked great but were a bit tedious. In the end it’s not really possible to get fully dynamic support (just yet), but at the very least any Ingress Controller I consider has to offer TCP/UDP as an option – I only switched to Traefik after they added UDP support.
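For context, exposing a UDP service through ingress-nginx means maintaining entries in a dedicated ConfigMap (referenced via the controller’s `--udp-services-configmap` flag). A sketch, with the service name and ports being illustrative:

```yaml
# ingress-nginx maps external UDP ports to Services via a ConfigMap;
# each data key is the external port, each value is "<namespace>/<service>:<port>".
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "53": "default/my-dns-service:5353"  # hypothetical DNS service
```

Every new UDP service means editing this shared map (and, depending on your setup, the controller’s own Service ports), which is exactly the tedium I wanted to escape.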

If I’m going to have a controller handle traffic for my cluster I want it to be able to handle TCP, UDP, and L7 protocols.

Why Envoy is compelling to me

Envoy has had support for UDP for a while now, but running into an article about it recently made me think I should reconsider. Like Traefik, Envoy has a compelling set of functionality that I want to make use of:

  • Protocol-aware filtering - ex. being able to filter certain redis commands
  • Extensibility - it’s pretty easy to write plugins/filters for Envoy from what I can see
  • Observability - well considered, and it can extend the benefits to your app as well by intercepting logging
  • Bridging - ex. bridging gRPC
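As a taste of the bridging feature, here’s a hedged sketch of the HTTP filter chain portion of an Envoy config that enables the gRPC HTTP/1.1 bridge, based on my reading of the Envoy docs:

```yaml
# Fragment of an Envoy HTTP connection manager config: the grpc_http1_bridge
# filter lets plain HTTP/1.1 clients call gRPC backends.
http_filters:
  - name: envoy.filters.http.grpc_http1_bridge
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_http1_bridge.v3.Config
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```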

Well anyway, it’s interesting. Envoy might be the most compelling gateway + intra-cluster proxy right now but there are a lot of projects jockeying to use it in a Kubernetes cluster (Istio being possibly the most famous one). I wanted to lay out some of my thoughts on the topic, while I was here.
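For reference, Envoy’s UDP support is configured as a listener filter. A minimal sketch, assuming a hypothetical `dns_upstream` cluster is defined elsewhere in the config:

```yaml
# UDP listener fragment: Envoy proxies datagrams on :5353 to an upstream cluster.
listeners:
  - name: udp_dns
    address:
      socket_address:
        protocol: UDP
        address: 0.0.0.0
        port_value: 5353
    listener_filters:
      - name: envoy.filters.udp_listener.udp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.udp.udp_proxy.v3.UdpProxyConfig
          stat_prefix: dns
          cluster: dns_upstream  # hypothetical cluster defined elsewhere
```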

Envoy-based Ingress Controllers

There are at least 4 ways to use Envoy in your cluster as an Ingress Controller:

Unfortunately, none of these seem to support the UDP functionality of Envoy at this point (well at least at the point the notes for this post were written down). Some supporting issues (still open as of 03/16/2021):


Another somewhat “odd duck” entry: Cilium actually supports Envoy, but not at the Ingress level. It has functionality for using Envoy as the Cilium proxy and extending it, which is great, but it seems considerably more complicated than “just” managing ingress. It feels like this is more for mesh functionality, though it looks like Cilium will use Envoy to receive external traffic, so theoretically it could be changed there too.

This approach seems viable but requires me to pick Cilium as my CNI provider, which is too big a requirement outright for now. I’m actually leaning towards switching to Cilium, but I don’t want to marry the two concepts (Ingress Controller + CNI layer) together quite this way… AFAIK Cilium is focused on managing network policy and CNI-standardized intra-pod communication, not ingress from the outside world.

Envoy Operator?

Theoretically I could use Envoy Operator, but I’d need to bring along the Ingress Controller bits myself: listening for and managing Ingress resources (or some other resource/CRD), and deploying/reconfiguring Envoy instances accordingly. The Envoy Operator also doesn’t currently seem to support injecting Envoy proxies as sidecars, which seems like a bit of a liability:

The Envoy Operator currently supports deploying proxies as standalone pods, but will soon support injecting Envoy proxies as sidecar containers into existing pods to serve as transparent proxies for use in a service mesh such as Istio.

So a bit of a dead end there too.

Bonus: A new challenger, GoBetween?

GoBetween is an entrant that I really like, and one I don’t think many people have heard of. It’s got some compelling features:

  • Go-based, very simple to run
  • Fantastic documentation
  • Relatively simple, doesn’t try to do too much
  • DNS SRV based load balancing
  • Docker/Swarm support
  • PROXY protocol support
  • SNI support

GoBetween also beats HAProxy quite handily in their testing, but it doesn’t support being a Kubernetes Ingress Controller.


Obviously there’s a lot of really nice, shiny, useful functionality in Envoy (just like Traefik), and I really want to take advantage of it, but support for UDP is a huge thing for me, even if I’m not making extensive use of it anymore. Moving to a controller that doesn’t have UDP support seems like a step backwards, and SMTP servers are a workload I have run in the past and definitely want to run again. I think it’s worth being very concerned with the capabilities of a system I’m adopting; I don’t want to choose between two relatively similar projects and end up with the one that has less functionality.

So it looks like I’m sticking with Traefik for now. I’m a bit unhappy with their stance on private plugins, mostly because I just don’t want to sign up for whatever their Pilot program is. I do wish them commercial success, but I may have to go back to trusty NGINX if there’s too much feature cannibalism. Right now, IMO, Traefik is quite possibly the best Ingress Controller on offer: TCP, UDP, and SNI support, plus integration with Let’s Encrypt (I use Cert Manager because I like to keep ingress and certs separate).
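For comparison, this is roughly what UDP routing looks like with Traefik’s CRDs as of early 2021 (entryPoint and service names here are illustrative): you define a UDP entryPoint in the static configuration (e.g. `--entryPoints.dns-udp.address=:53/udp`) and then route to a Service with an `IngressRouteUDP`:

```yaml
# Traefik CRD-based UDP routing: traffic arriving on the "dns-udp"
# entryPoint is forwarded to a backing Service.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: dns-udp
spec:
  entryPoints:
    - dns-udp              # UDP entryPoint from Traefik's static configuration
  routes:
    - services:
        - name: my-dns-service  # hypothetical ClusterIP Service
          port: 5353
```

No shared ConfigMap to edit; adding a UDP route is just another Kubernetes resource, which is the dynamism I was after.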