Scripts that make it possible to have HAProxy dynamically configure itself based on the current state of Services in a kubernetes cluster.

## Background

I developed this script to provide a gateway into a kubernetes cluster deployed on OpenStack. If I had been in a public cloud, I would probably have used a managed load balancer. When I deployed the kubernetes cluster, I deployed the gateway server(s) as nodes, so they would have the same network overlay (e.g. flannel, calico, etc.) as the rest of the cluster. This meant that HAProxy could route traffic directly to container workloads.

## Setup gateway

A gateway server is used to allow incoming traffic into the kubernetes cluster based on Services. The steps below set up an HAProxy server for this purpose:

- Disable pod scheduling on the gateway node(s).
- Create a ServiceAccount for gateway API calls.
- Copy over the gateway config script and set up cron.
- Some adjustment may be required to security groups (reference).

To run the kubectl commands below, first SSH into one of the master nodes through the bastion.

The gateway is set up by kubespray as a node to facilitate pod access using the selected network overlay (e.g. flannel, calico, etc.). A `*.` wildcard record points at 192.168.12.219. The IP above could be public, but it really just needs to be routable on the network that needs access to the workloads you run on kubernetes.

This first command disables pod scheduling so all resources are available to HAProxy.

### Copy gateway config script and create cron job

Copy the following files from the /eng_resources directory:

- haproxy.j2
- gateway-haproxy-config.py
- gateway-config-cron

The file gateway-haproxy-config.py needs to be updated with the IP address of one of the master nodes and the TOKEN for the ServiceAccount created above. The shell script gateway-config-cron needs to be updated with the correct path to gateway-haproxy-config.py. The last two files need to be in the same directory. The file haproxy.j2 needs to be updated based on the SSL certificate and domain name used in the next step.

### Setup SSL automation

Kubernetes API interactions on port 7443 require SSL. This process uses Let's Encrypt to get a valid, signed certificate, and the acme.sh script to interact with Let's Encrypt. The commands below will need to be adjusted to the cluster being configured; in the example below, the Austin Engineering cluster is assumed.

---

haproxy.cfg (excerpt from the question below):

```
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
# http-request redirect scheme https unless
server kubenode03 10.1.160.
```
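As a sketch of the dynamic-configuration idea behind gateway-haproxy-config.py, the snippet below turns a list of NodePort Services (as the script might read them from the Kubernetes API using the ServiceAccount token) into HAProxy backend stanzas. The function name, data shape, and IPs are all hypothetical, not taken from the actual script.

```python
# Hypothetical sketch of the transformation a script like
# gateway-haproxy-config.py performs: map NodePort Services onto
# HAProxy backend stanzas, one server line per cluster node.

def render_backends(services, node_ips):
    """Render one HAProxy backend per NodePort service.

    services: list of dicts like {"name": ..., "node_port": ...}
    node_ips: node addresses that expose the NodePort.
    """
    stanzas = []
    for svc in services:
        lines = [f"backend {svc['name']}",
                 "    balance roundrobin"]
        for i, ip in enumerate(node_ips, start=1):
            # Every node proxies the NodePort, so each one becomes a server.
            lines.append(f"    server node{i:02d} {ip}:{svc['node_port']} check")
        stanzas.append("\n".join(lines))
    return "\n\n".join(stanzas)

if __name__ == "__main__":
    services = [{"name": "my-nginx", "node_port": 30438}]
    nodes = ["10.1.160.171", "10.1.160.172"]
    print(render_backends(services, nodes))
```

The real script would feed stanzas like these through haproxy.j2 and reload HAProxy from cron; rendering is shown here as a pure function so the transformation is easy to see.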
I've gone through the guide "Install and configure a multi-master Kubernetes cluster with kubeadm" and set this up. HAProxy has been set up on a VM separate from my Kubernetes cluster. Everything is working properly between my Kubernetes cluster and HAProxy, from what I can tell. I was hoping to visit my HAProxy IP and be redirected to one of my Kubernetes nodes that is being load balanced.

My haproxy.cfg also contains:

```
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
```

I can set up an Nginx deployment with a manifest beginning `apiVersion: apps/v1`, then create the service:

```
kubectl expose deployment my-nginx --port=80
kubectl get service
NAME   TYPE   CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
```

If I now try and visit my HAProxy IP 10.1.160.170, I'm not redirected to my Kubernetes node on port 30438. I've also tried this with the service type LoadBalancer. Is HAProxy not meant to forward connection requests to the actual Kubernetes nodes in this article? Can someone please skim over this guide and tell me the use case of HAProxy in this guide?
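For context on the question above: in multi-master kubeadm setups of this kind, HAProxy is typically there to load-balance the Kubernetes API server (port 6443) across the masters, giving kubectl and the kubelets a single control-plane endpoint; it does not automatically forward application traffic. Reaching the nginx Service through HAProxy would need an explicit frontend/backend aimed at the NodePort, along these lines (illustrative only; the backend IP is a hypothetical node address, and 30438 is the NodePort from the question):

```
# Illustrative sketch, not from the guide: forward web traffic
# to the Service's NodePort on a worker node.
frontend nginx_http
    bind *:80
    default_backend nginx_nodeport

backend nginx_nodeport
    balance roundrobin
    server kubenode01 10.1.160.171:30438 check
```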