GatewayAPI with HTTPRoute and TLS using cilium
Configuring Gateway API with Cilium on a local Kubernetes cluster initialized with kind.
In this post I will explain how to set up TLS for the gateway that will be used with the HTTPRoute.
As a reminder Gateway API is an official Kubernetes project focused on L4 and L7 routing in Kubernetes. This project represents the next generation of Kubernetes Ingress, Load Balancing, and Service Mesh APIs. From the outset, it has been designed to be generic, expressive, and role-oriented.
In a previous post I showed how to create a local Kubernetes cluster using kind, and in another post how Cilium can be used to provide a LoadBalancer. Since then Cilium has evolved to version 1.16.5, and a CiliumLoadBalancerIPPool can assign an IP from a defined range without hardcoding the external IP into the LoadBalancer-type Service. That can be done like this:
#cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "test-pool"
spec:
  blocks:
    - start: "172.18.0.10"
      stop: "172.18.0.50"
EOF
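To confirm the pool was accepted and see how many addresses it has available, you can query the CRD directly (a quick sanity check; the full resource name is used here to avoid relying on short names):

```shell
#kubectl get ciliumloadbalancerippools.cilium.io
```

The output should list test-pool with the number of IPs available from the 172.18.0.10-172.18.0.50 range.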
That being said, the certificate will be handled by cert-manager, which I showed how to configure here. However, for this exercise we will be using a self-signed certificate, so the ClusterIssuer will look like this:
#cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}
EOF
For this tutorial we will be using Cilium's Gateway API implementation, which uses Envoy as its proxy. Therefore we need to update the Cilium deployment with a few options to enable this functionality:
# cilium install --version 1.16.5 --set kubeProxyReplacement=true \
--set gatewayAPI.enabled=true
If you already have Cilium deployed, you can simply upgrade it by using cilium upgrade instead of cilium install in the procedure above.
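After the install or upgrade you can verify that the Gateway API support was actually picked up by the agent. One way to sketch this check (the exact config key name is an assumption based on Cilium's naming convention) is:

```shell
#cilium config view | grep gateway-api
```

A line such as enable-gateway-api set to true indicates the feature is active.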
Note that there is a dependency here: the Gateway API CRDs must be installed before running the cilium install/upgrade, otherwise you will have to rerun it afterwards.
#kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
#kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/experimental/gateway.networking.k8s.io_tlsroutes.yaml
Since TLSRoutes are experimental, they are not included in the standard CRDs and need to be deployed separately. Make sure you use the same version as for the standard CRDs.
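To double-check which Gateway API CRDs landed in the cluster, you can filter the CRD list by the API group:

```shell
#kubectl get crd | grep gateway.networking.k8s.io
```

You should see gateways, gatewayclasses, httproutes and, after the second apply, tlsroutes among the entries.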
Once everything is deployed properly, you should see that the GatewayClass has been accepted.
#kubectl get gatewayclass
NAME CONTROLLER ACCEPTED AGE
cilium io.cilium/gateway-controller True 8h
Now let's make the certificate and attach it to the gateway.
#cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grafana-tls
  namespace: kube-system
spec:
  dnsNames:
    - gfn.zozoo.io
  issuerRef:
    kind: ClusterIssuer
    name: selfsigned-cluster-issuer
  secretName: grafana-tls
EOF
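If you want to inspect what cert-manager actually issued, you can decode the certificate from the generated secret (assuming openssl is available on your machine):

```shell
#kubectl get secret grafana-tls -n kube-system -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -dates
```

The subject should contain gfn.zozoo.io and the dates show the certificate's validity window.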
After a few seconds the certificate should be ready.
#kubectl get certificates -n kube-system
NAME READY SECRET AGE
grafana-tls True grafana-tls 8h
Now let's deploy the Gateway.
#cat << EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cilium-gateway
  namespace: kube-system
spec:
  gatewayClassName: cilium
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gw-access: "true"
    - name: https
      port: 443
      protocol: HTTPS
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gw-access: "true"
      tls:
        mode: Terminate
        certificateRefs:
          - group: ""
            kind: Secret
            name: grafana-tls
EOF
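Once the Gateway is applied, Cilium should program it and assign it an address from the IP pool created earlier:

```shell
#kubectl get gateway -n kube-system
```

The ADDRESS column should show an IP from the 172.18.0.10-172.18.0.50 range and PROGRAMMED should read True.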
As can be seen, we have set up two listeners: one on port 80 for HTTP and one on port 443 for HTTPS. The listeners use a namespace selector that allows cross-namespace routing from any namespace that has the label gw-access=true (the label can be anything; you don't have to use this one). Now we are ready to configure the HTTPRoute in order to access the application, which will be Grafana, running in the monitoring namespace.
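Since the listeners only admit routes from namespaces carrying that label, the monitoring namespace needs to be labelled accordingly, otherwise the HTTPRoute will not be accepted by the gateway:

```shell
#kubectl label namespace monitoring gw-access=true
```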
#cat << EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  hostnames:
    - gfn.zozoo.io
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: cilium-gateway
      namespace: kube-system
      sectionName: https
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - group: ""
          kind: Service
          name: grafana
          port: 80
          weight: 1
EOF
The HTTPRoute has to be created in the same namespace where the application is running; that doesn't necessarily mean it has to be in the same namespace as the gateway.
As a bonus step we will create another HTTPRoute to redirect http to https.
#cat << EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-redirect
  namespace: monitoring
spec:
  hostnames:
    - gfn.zozoo.io
  parentRefs:
    - name: cilium-gateway
      namespace: kube-system
      sectionName: http
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301
EOF
After checking the HTTPRoutes we can see there are two entries:
#kubectl get HTTPRoutes -n monitoring
NAME HOSTNAMES AGE
grafana ["gfn.zozoo.io"] 10h
http-filter-redirect ["gfn.zozoo.io"] 71s
Let's run some curl from one of the nodes: since this Kubernetes cluster is running in Docker, the LoadBalancer IP cannot be accessed from the host. Also, since there is no real domain configured for it, we are going to add an IP mapping to /etc/hosts on the control-plane node.
#kubectl node-shell kind-control-plane
spawning "nsenter-jl5p6p" on "kind-control-plane"
If you don't see a command prompt, try pressing enter.
root@kind-control-plane:/#
#echo "172.18.0.10 gfn.zozoo.io" | tee -a /etc/hosts
#curl -H "Host: gfn.zozoo.io" http://172.18.0.10/ -LIk
HTTP/1.1 301 Moved Permanently
location: https://gfn.zozoo.io:443/
date: Tue, 24 Dec 2024 12:17:43 GMT
server: envoy
transfer-encoding: chunked
HTTP/1.1 302 Found
cache-control: no-store
content-type: text/html; charset=utf-8
location: /login
x-content-type-options: nosniff
x-frame-options: deny
x-xss-protection: 1; mode=block
date: Tue, 24 Dec 2024 12:17:43 GMT
x-envoy-upstream-service-time: 1
server: envoy
transfer-encoding: chunked
HTTP/1.1 200 OK
cache-control: no-store
content-type: text/html; charset=UTF-8
x-content-type-options: nosniff
x-frame-options: deny
x-xss-protection: 1; mode=block
date: Tue, 24 Dec 2024 12:17:43 GMT
x-envoy-upstream-service-time: 5
server: envoy
transfer-encoding: chunked
As can be seen, there is first a 301 redirect to https, then a 302 redirect to /login, which ends up in a 200 OK.
Everything works as expected.