Install and configure the HAProxy ingress controller on Kubernetes
In this post I will explain how to install and configure the HAProxy ingress controller. In order for this to work you need a fully installed Kubernetes cluster and a workstation with Helm 3.6 (preferably 3.7) installed. First we need to add the HAProxy repository to Helm.
#helm repo add haproxytech https://haproxytech.github.io/helm-charts
#helm repo update
Now we are ready to install the HAProxy ingress controller using Helm. We will do that in a new namespace. But first we download the values.yaml from here and start editing it to configure the ingress controller. Only the lines that were changed are shown below.
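Alternatively, the chart's default values can be dumped straight from the repository instead of downloading the file (an equivalent way to get a starting point; the output file name simply matches the one used later in this post):
#helm show values haproxytech/kubernetes-ingress > values.yaml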
...
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
kind: DaemonSet # can be 'Deployment' or 'DaemonSet'
...
## Custom configuration for Controller
## ref: https://github.com/haproxytech/kubernetes-ingress/tree/master/documentation
config:
  timeout-connect: "250ms"
  scale-server-slots: "5"
  dontlognull: "true"
  logasap: "true"
...
## Controller Service configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/service/
service:
  enabled: true # set to false when controller.kind is 'DaemonSet' and useHostPort is true
  type: NodePort # can be 'NodePort' or 'LoadBalancer'
...
## Controller DaemonSet configuration
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
daemonset:
  useHostNetwork: true # also modify dnsPolicy accordingly
  useHostPort: true
  hostPorts:
    http: 80
    https: 443
    stat: 1024
## Controller deployment strategy definition
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
strategy:
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%
  type: RollingUpdate
...
## Set additional environment variables
extraEnvs:
  ## Set TZ env to configure timezone on controller containers
  - name: TZ
    value: "Etc/UTC"
...
## Default 404 backend
defaultBackend:
  enabled: false
  name: default-backend
  replicaCount: 2
...
Basically what we are configuring is the ingress controller type. We go with a DaemonSet and then enable host networking. The service type has to be NodePort (the default option). The LoadBalancer option can be used when the cluster runs in a cloud; in that case the cloud provider's load balancer provides the public IP for you. If you are running the Kubernetes cluster on bare metal or virtual machines, you need to configure it as a NodePort. The rest of the options are personal preferences, such as disabling the default backend or setting the number of backend server slots (scale-server-slots), which defaults to 42; I reduced it to 5 knowing that I would never run more than 5 pods for a service.

Now it is time to deploy the whole thing. To pick the proper version, I first check what the latest version of the ingress controller is:
#helm search repo haproxytech
NAME                            CHART VERSION   APP VERSION   DESCRIPTION
haproxytech/haproxy             1.9.0           2.5.0         A Helm chart for HAProxy on Kubernetes
haproxytech/kubernetes-ingress  1.18.1          1.7.4         A Helm chart for HAProxy Kubernetes Ingress Con...
#helm install ingress haproxytech/kubernetes-ingress --namespace ingress-controller --create-namespace --version 1.18.1 -f values.yaml
Release "ingress" has been upgraded. Happy Helming!
NAME: ingress
LAST DEPLOYED: Sat Jan 15 04:22:58 2022
NAMESPACE: ingress-controller
STATUS: deployed
REVISION: 5
TEST SUITE: None
NOTES:
HAProxy Kubernetes Ingress Controller has been successfully installed.
Controller image deployed is: "haproxytech/kubernetes-ingress:1.7.4".
Your controller is of a "DaemonSet" kind. Your controller service is running as a "NodePort" type.
RBAC authorization is enabled.
Controller ingress.class is set to "haproxy" so make sure to use same annotation for
Ingress resource.
Service ports mapped are:
- name: http
  containerPort: 80
  protocol: TCP
  hostPort: 80
- name: https
  containerPort: 443
  protocol: TCP
  hostPort: 443
- name: stat
  containerPort: 1024
  protocol: TCP
  hostPort: 1024
Node IP can be found with:
$ kubectl --namespace ingress-controller get nodes -o jsonpath="{.items[0].status.addresses[1].address}"
The following ingress resource routes traffic to pods that match the following:
* service name: web
* client's Host header: webdemo.com
* path begins with /
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    ingress.class: "haproxy"
spec:
  rules:
  - host: webdemo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
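Before creating the Ingress, it is worth verifying that the controller pods are up and that the service exposes the expected node ports (standard kubectl checks; the namespace is the one used at install time):
#kubectl get pods -n ingress-controller -o wide
#kubectl get svc -n ingress-controller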
After checking that the pods have started without issues and are in the Running state, we can deploy the Ingress object to make the website accessible. We create an ingress manifest file:
#cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    ingress.class: "haproxy"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
#kubectl apply -f ingress.yaml
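To confirm the Ingress object was created and picked up by the controller, a quick check (the resource name matches the manifest above):
#kubectl get ingress web-ingress -n default
#kubectl describe ingress web-ingress -n default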
Note: backend.service.name has to be the name of the Service configured for that deployment. The Service can be of type ClusterIP or NodePort.
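For completeness, a matching backend like the web Service referenced above could be created like this (purely illustrative; the nginx image and the port are assumptions, not part of the original setup):
#kubectl create deployment web --image=nginx --port=80
#kubectl expose deployment web --port=80 --target-port=80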
What happens next depends on how the Kubernetes cluster is configured. If the nodes have public IPs, the site can be accessed by browsing to the IP of any node. However, if the cluster uses internal NATed IPs, you need a proxy or load balancer in front of it that balances traffic at the TCP level, not the application level. In this case we will configure HAProxy on an external server that has both public and private IPs.
#cat /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0 info
defaults
    timeout connect 8000
listen haproxy-ingress-80
    log global
    bind 51.79.101.163:80
    mode tcp
    balance roundrobin
    server haproxy-ingress-controller1 <node1_ip>:80
    server haproxy-ingress-controller2 <node2_ip>:80
    ...
listen haproxy-ingress-443
    log global
    bind <public ip>:443
    mode tcp
    balance roundrobin
    server haproxy-ingress-controller1 <node1_ip>:443
    server haproxy-ingress-controller2 <node2_ip>:443
    ...
listen haproxy-ingress-1024
    log global
    bind <public ip>:1024
    mode tcp
    balance roundrobin
    server haproxy-ingress-controller1 <node1_ip>:1024
    server haproxy-ingress-controller2 <node2_ip>:1024
    ...
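Before testing, the configuration can be syntax-checked and HAProxy restarted (typical commands on a systemd-based host; adjust the service name if yours differs):
#haproxy -c -f /etc/haproxy/haproxy.cfg
#systemctl restart haproxy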
Now it is time to test the ingress.
#curl -H 'Host: example.com' http://<public ip> -IL
HTTP/2 200
date: Sat, 15 Jan 2022 02:57:20 GMT
content-type: text/html;charset=utf-8
set-cookie: JSESSIONID=node016hlipqllr2wfbtpvw4ynphof38092.node0; Path=/; HttpOnly; SameSite=None
expires: Thu, 01 Jan 1970 00:00:00 GMT
content-length: 3259
server: Jetty(9.4.38.v20210224)
set-cookie: SERVERID=SRV_2; path=/
cache-control: private
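The stat port (1024) that we also mapped through the external HAProxy should serve the controller's stats page and can be checked the same way (illustrative):
#curl -I http://<public ip>:1024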