Oh, this is so funny, I'm actually looking into the same thing right now :D I was thinking about using a Cilium-based solution, but I stumbled upon something that seems promising: Cloudflare tunnels. The idea is to set up a tunnel and run the tunnel client in the cluster to route all traffic to the ingress controller. This way, you don't need a load balancer or firewall rules, since only traffic from Cloudflare gets through. Have you tried this?
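Roughly what I had in mind, as a sketch — the tunnel ID, credentials path, hostname, and Service address are all placeholders, so check the cloudflared docs for the exact schema:

```yaml
# config.yaml for a cloudflared Deployment running inside the cluster
# (sketch only; tunnel ID, credentials, and hostnames are placeholders)
tunnel: my-tunnel-id
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  # route the site's traffic to the ingress controller's Service
  - hostname: example.dev
    service: http://traefik-service.ingress.svc.cluster.local:80
  # cloudflared requires a catch-all rule at the end
  - service: http_status:404
```

With this, the cluster never needs a public LoadBalancer at all — cloudflared dials out to Cloudflare's edge, so nothing but tunnel traffic reaches the ingress controller.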
First of all, thank you for your work on this. I've been having a blast getting into k8s without having to pay Mr. Bezos and friends a gazillion dollars for a cluster.
I've found myself in a tricky situation with IP filters the past few days though, and I'm curious what kinds of solutions others have come up with. I've got a Hetzner load balancer in front of my ingress controller (Traefik) and a setup that works for what I want to do. I've enabled the proxy protocol:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-service
  namespace: ingress
  annotations:
    load-balancer.hetzner.cloud/location: fsn1
    load-balancer.hetzner.cloud/name: kube-lb
    load-balancer.hetzner.cloud/use-private-ip: "true"
+   load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
    load-balancer.hetzner.cloud/hostname: xetera.dev
    load-balancer.hetzner.cloud/http-redirect-https: "false"
```

and Traefik can pick up the real IP for routing purposes just fine.
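For completeness, Traefik also has to be told to trust the proxy protocol header on its entrypoint. A minimal sketch of the static configuration — assuming the LB connects from the private 10.0.0.0/8 range; adjust `trustedIPs` to your own network:

```yaml
# traefik.yml (static configuration) -- a sketch, not my exact setup
entryPoints:
  websecure:
    address: ":443"
    proxyProtocol:
      # only accept PROXY headers from the Hetzner LB's private network
      trustedIPs:
        - "10.0.0.0/8"
```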
But here's the problem: Hetzner, like many other "smaller" cloud providers such as DigitalOcean, doesn't allow setting firewall rules on their load balancers. In my opinion that makes them almost useless for serious production use, but that's beside the point and not a problem for my use case.
To work around this, the first thought that popped into my head was to set up a NetworkPolicy rule and drop traffic coming from outside Cloudflare's IP ranges. But because the load balancer initiates its own connection, I'm unable to match CIDR rules in a `CiliumNetworkPolicy` to the real IP of the request. Everything is logged as 10.0.0.4 instead.
This is exactly what `uses-proxyprotocol` is supposed to prevent, and it works great for the most part. But if you get into the nitty-gritty details, it doesn't actually change the IP the way a transparent proxy would: it adds a special header at the start of the TCP connection that the receiving application has to interpret. That's why there's a big scary warning in the Hetzner UI about enabling the proxy protocol. (This is probably not news to a lot of you, but it was to me.)
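For reference, this is roughly the kind of rule I was hoping would work — a sketch where the labels are made up and the CIDR is just one of Cloudflare's published ranges:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-cloudflare-only
  namespace: ingress
spec:
  endpointSelector:
    matchLabels:
      app: traefik
  ingress:
    - fromCIDR:
        - 173.245.48.0/20
# ...but the source IP Cilium sees is the LB's (10.0.0.4), not the
# client's, so the CIDR match never applies to the real address.
```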
And because Cilium deals with traffic rules down to the individual packets, it doesn't seem to be compatible with the proxy protocol. I haven't used any other CNI, so it's possible this is a feature Cilium specifically lacks, but I'm guessing kube-proxy and other alternatives don't take it into account in their network policy implementations either.
Based on some light research, I think MetalLB might sidestep this problem entirely, since from what I understand it lets you run the load balancing directly on a node you can do filtering on. That idea is increasingly tempting after seeing the situation with Hetzner load balancers, but I didn't want to spend many more hours making that change, so I went with Traefik middlewares instead.
It's both an inefficient solution and difficult to apply to all routes globally; you can still hit the load balancer by IP and get a 404 instead of a Forbidden. But it seems to do the job, since Traefik has access to the real IP internally when doing the filtering.
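Concretely, the middleware approach looks something like this — a sketch, not my exact manifest. On Traefik v2 the middleware is called `ipWhiteList` (renamed `ipAllowList` in v3), and in practice you'd list all of Cloudflare's published ranges:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: cloudflare-only
  namespace: ingress
spec:
  ipWhiteList:
    sourceRange:
      - 173.245.48.0/20
      - 103.21.244.0/22
      # ...the rest of Cloudflare's published IP ranges
```

The middleware then has to be attached to each router, e.g. via the `traefik.ingress.kubernetes.io/router.middlewares` annotation or in an IngressRoute — which is exactly why getting it applied globally is a pain.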
I've recently started learning Kubernetes after previous encounters that amounted to blindly running helm commands, so I'm wondering if I've overlooked something obvious, or if this is a real problem for other people too.