Create a Kubernetes LoadBalancer on a self-hosted cluster using MetalLB

If you run a self-hosted cluster and want to use an ingress controller or the Gateway API, you need to be able to create a Service of type LoadBalancer. Since LoadBalancer Services are normally tied to a cloud provider's environment, you don't have many options on bare metal.
One of the best solutions to this problem is to create a load balancer using MetalLB.

Requirements for this are the following:

  • a Kubernetes cluster
  • several free IP addresses on the network the nodes are attached to

The next step is to deploy MetalLB. To do that, we are going to use the manifest method:

 # kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml
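
Before moving on, it's worth checking that the controller and speaker pods came up in the metallb-system namespace. The wait command below is optional and assumes the default app=metallb label set by the manifest:

# kubectl get pods -n metallb-system
# kubectl wait -n metallb-system --for=condition=ready pod --selector=app=metallb --timeout=90s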

Once the manifests are deployed, it's time to create the configuration. First we are going to deploy the IPAddressPool and then the L2Advertisement.

The IPAddressPool configuration looks something like this:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.1.50-10.0.1.250

Note: the selected IPs can be any free range on the network where the Kubernetes nodes are running.
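
MetalLB also accepts CIDR notation in the addresses list, so a pool can cover a whole subnet instead of an explicit range (the /24 below is only an example and would also include the node IPs):

  addresses:
  - 10.0.1.0/24
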
Next, we are going to deploy the IPAddressPool into the cluster where MetalLB is installed.

# cat << EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.1.50-10.0.1.250
EOF
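
To confirm the pool was created, you can list the IPAddressPool resources in the metallb-system namespace:

# kubectl get ipaddresspools.metallb.io -n metallb-system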

Now it's time to deploy the L2Advertisement. For this we are going to use the following manifest:

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: <some name>
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  nodeSelectors:
  - matchLabels:
      kubernetes.io/hostname: NodeA
  - matchLabels:
      kubernetes.io/hostname: NodeB

The nodeSelectors field controls which nodes announce the addresses from the pool: when it is omitted, any node in the cluster can announce them, otherwise only the selected nodes do, as shown above. If the nodes have more than one interface, you can also specify the interfaces the addresses will be announced on, like in this example:

interfaces:
- eth0
- eth1
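
Putting it together, an advertisement restricted to specific nodes and interfaces would look something like this (the hostname and interface name are placeholders):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: restricted-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  nodeSelectors:
  - matchLabels:
      kubernetes.io/hostname: NodeA
  interfaces:
  - eth0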

Since my cluster is a small one, I am not going to limit the advertisement to any particular nodes or interfaces.

# cat << EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
EOF
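
To confirm the advertisement was created, you can list the L2Advertisement resources:

# kubectl get l2advertisements.metallb.io -n metallb-system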

Now our cluster is ready to have an application deployed. I am not going to cover deploying nginx in detail, as there are plenty of examples out there; for this exercise we are going to assume that nginx is deployed with the label app: nginx, as in the minimal sketch below.
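
For reference, a minimal deployment carrying that label could look like this; the image tag and replica count are arbitrary:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80

Next, we are going to deploy a LoadBalancer type of Service to access the deployed nginx from outside the k8s cluster.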

# cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
EOF

Now, to check that the load balancer was created, we need to run the following command:

# kubectl get svc                                           
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx        LoadBalancer   10.99.208.67   10.0.1.50    80:30914/TCP   35s

As can be seen, the external IP is the first IP address from the range assigned to the load balancer.
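
If you want to see how the address was assigned, kubectl describe shows the Service events, where MetalLB records the IP allocation and the node announcing it:

# kubectl describe svc nginx
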
In order to test if the application is really working, we are going to run curl against the IP address assigned to the load balancer.

# curl http://10.0.1.50
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.25.5</center>
</body>
</html>

The nginx did not have any content deployed, therefore it ended up serving a 404 Not Found, but it proves that an application running in k8s can be accessed via the IP address that MetalLB assigned to the LoadBalancer Service.