Using a Microk8s Kubernetes Ingress as a Reverse Proxy for External Web Servers

Fairly early in the Kubernetes journey you are told about Ingress Controllers. They take an incoming connection (usually HTTP or HTTPS) and direct it to one or more services based on the path of the request. In the case of the NGINX Ingress Controller it is literally a reverse proxy used as a frontend for services.

If you happen to need a reverse proxy (for example to provide authentication for your Let's Encrypt SSL certificate automatic renewal) it looks tantalisingly like it should be easy to do this from your Microk8s cluster. And with the benefit of MetalLB (Bare metal Load Balancer) it should be possible for this to keep working if one of the nodes should fail.

This turned out to be far harder than expected. Only by combining a lot of separate examples and parts was it possible to accomplish this. Furthermore, this functionality is only really intended for use while you are in the process of migrating an external service into a Kubernetes cluster.

Building the reverse proxy



There are several parts to making this work on a microk8s cluster:
  • Load Balanced IP Addresses
  • Exposing the Ingress
  • Defining the Ingress
  • Creating a service and endpoint for the external web server

This article assumes that you have aliased kubectl to microk8s.kubectl.
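For example, adding a line like the following to your shell profile (e.g. ~/.bash_aliases) sets that up:

alias kubectl='microk8s.kubectl'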

The setup



In this example:

  • The microk8s cluster and the host we want to expose (myhost.example.com) are behind a NAT, and we have a split DNS
  • The cluster will have a MetalLB IP address of 10.0.0.60
  • The NAT gateway will forward http to 10.0.0.60
  • myhost.example.com points to 10.0.0.6 on the inside
  • myhost.example.com points to the outside address of the gateway on the outside

Load Balanced IP Addresses



Once you have your Microk8s cluster set up it is easy to add MetalLB and define a range of addresses to be available on the hosts of your cluster.

microk8s enable metallb:10.0.0.60-10.0.0.69


This example configures the IP range to include addresses from 10.0.0.60 to 10.0.0.69. If you leave out the range, you will be prompted to type one in.
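If MetalLB has come up you should see its controller and speaker pods running; the addon installs them in the metallb-system namespace (the namespace name may differ in older MicroK8s releases):

kubectl get pods -n metallb-system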

Exposing the Ingress



Enable the Ingress controller

microk8s enable ingress
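The addon runs the NGINX ingress controller as a DaemonSet in the ingress namespace, so a quick sanity check (pod names will vary) is:

kubectl get pods -n ingress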


Expose the Ingress to requests from outside the cluster using a load balanced IP

The following YAML definition (ingress-service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  loadBalancerIP: 10.0.0.60
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443


can be loaded with kubectl apply -f ingress-service.yaml

It is discussed in an article by Jonathon at https://gist.github.com/djjudas21/ca27aab44231bdebb0e72d30e00553ff

$ kubectl -n ingress get svc
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress   LoadBalancer   10.152.183.111   10.0.0.60     80:31533/TCP,443:32442/TCP   4d1h


So now we have an IP address and an HTTP port (and an HTTPS port) that can be accessed from outside the cluster.
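At this stage no Ingress rules have been defined, so a request to the load balanced address should just return the controller's default backend (typically a 404). This makes a useful sanity check before going further:

curl -i http://10.0.0.60/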

Defining the Ingress


This is in the v1 style:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myhost-services
spec:
  rules:
  - host: "myhost.example.com"
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: myhost-http
            port:
              number: 80

To load this, save the above to myhost-services.yaml and run kubectl apply -f myhost-services.yaml
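You can confirm that the controller has picked up the rule (the ADDRESS column can take a short while to populate):

kubectl get ingress myhost-services
kubectl describe ingress myhost-services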

We have a host in our domain with an internal IP address; traffic to its external address is delivered to our gateway, which forwards it to the load balanced IP address.

Any request for myhost.example.com arriving at the load balanced IP address on port 80 will be sent to the service myhost-http.

Creating a service and endpoint for the external web server



And the secret sauce for getting this to reach your host is to define a Service with no selector, plus an Endpoints object that supplies the external IP address:

apiVersion: v1
kind: Endpoints
metadata:
  name: myhost-http
subsets:
- addresses:
  - ip: 10.0.0.6
  ports:
  - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myhost-http
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80


To load this, save the above to myhost.yaml and run kubectl apply -f myhost.yaml
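Because the Service has no selector, Kubernetes links it to the manually created Endpoints object with the same name. You can confirm the link with:

kubectl describe service myhost-http
kubectl get endpoints myhost-http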

Testing



You can test it using curl with the --resolve flag:

% curl http://myhost.example.com --resolve myhost.example.com:80:10.0.0.60
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>macOS Server</title>
<base target="_parent" />
<style type="text/css" media="screen">
html,body,div,ul,ol,li,dl,dt,dd,h1,h2,h3,h4,h5,h6,pre,form,p,blockquote,fieldset,input,abbr,article,aside,command,details,figcaption,figure,footer,header,hgroup,mark,meter,nav,output,progress,section,summary,time {
margin: 0px;
padding: 0px;
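An equivalent test that skips the progress meter is to set the Host header explicitly (a variation on the above, not output captured from this setup):

% curl -sS -H "Host: myhost.example.com" http://10.0.0.60/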


Conclusion



This relatively simple task was considerably more complex than expected:
  • In part because declarative languages are great for describing something, but they are not good at helping you describe the right thing
  • The v1beta1 format for Ingress definitions was deprecated partway through the work
  • There are many flavours of Kubernetes and Ingress controllers, and hosted environments differ slightly from bare metal, so finding a good example is hard
  • Debugging required both curl (for testing from outside the cluster) and a host container running inside the cluster (see the sketch after this list)
  • Ingresses, while simple in concept, are bedevilled by implementation details
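For the in-cluster debugging mentioned above, a throwaway pod with curl is enough; this is a sketch (the image and pod name are only examples):

kubectl run debug --rm -it --restart=Never --image=curlimages/curl -- sh

From the shell inside that pod, curl http://myhost-http.default.svc.cluster.local/ exercises the Service and Endpoints directly, bypassing the Ingress.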