Nginx
For our exercises we now start with ingress-nginx, an Open Source Ingress Controller that is a safe choice and one of the most popular options when you need a simple solution. It is maintained and well-integrated by the Kubernetes project itself, with an option for commercial support provided by NGINX Inc, cf. these version differences explained. It has already been set up in our test cluster beforehand (via helm, by the way, see the Setup instructions above), so we can jump right into making use of it.
As the name implies, this solution utilizes the popular Nginx web server as a reverse proxy and load balancer, with the Ingress Controller providing the means to easily configure it to our purposes. Under the hood things are not so simple, of course, and you can view the various components via
kubectl get all --all-namespaces -l app.kubernetes.io/name=ingress-nginx
(note the --all-namespaces), e.g.:
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx pod/ingress-nginx-controller-54bfb9bb-p5l6d 1/1 Running 0 5h38m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.96.117.139 172.18.255.3 80:30650/TCP,443:32377/TCP 5h38m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 5h38m
NAMESPACE NAME DESIRED CURRENT READY AGE
ingress-nginx replicaset.apps/ingress-nginx-controller-54bfb9bb 1 1 1 5h38m
But for now we will just use what the Ingress Controller provides without diving into too much detail on how this will be achieved.
Note
As we will make use of the Ingress EXTERNAL-IP quite a lot, best export it to your environment: export INGRESS_IP_NGINX=<your_Ingress_IP>.
export INGRESS_IP_NGINX=$(kubectl get service --namespace ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[].ip}')
should do the trick, which you can then verify via echo $INGRESS_IP_NGINX. This variable will not persist over a
logout nor will it spread to other separate sessions, so remember to set it again whenever you (re)connect to your user
VM.
Namespace
With the Ingress Controller still being a pre-defined, cluster-wide, central solution, we again need to take care not to step on each other's toes in the following exercises. For that we will prefix some parts of the following resources with a user-specific namespace. For a prod deployment - with only a single Ingress Controller deployed - we could rely on ingress-nginx's Admission Webhook, though, to strictly avoid any Ingress resource clashes.
Note
As we will use this personal namespace several times as a resource prefix, make it now available for easy consumption
via a variable: export NAMESPACE=$(kubectl config view --minify --output 'jsonpath={..namespace}'); echo $NAMESPACE
This variable will not persist over a logout nor will it spread to other separate sessions, so remember to set it again
whenever you (re)connect to your user VM.
Exercise - Recreate our sample application
Remember how to use the command line to create a Deployment called sampleapp with the image novatec/technologyconsulting-hello-container:v0.1?
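In case you need a refresher, the command we had used looked something like this (if your Deployment is gone in the meantime, just run it again):
kubectl create deployment sampleapp --image novatec/technologyconsulting-hello-container:v0.1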
You still have that running, i.e. Kubernetes told you that Error from server (AlreadyExists): deployments.apps “sampleapp” already exists? Doesn’t matter, let’s just keep using it then.
And now we will expose that Deployment using a LoadBalancer Service, in effect similar to what we covered in Services, but with a different command:
kubectl expose deployment sampleapp --type LoadBalancer --port 8080
Milestone: K8S/INGRESS/NGINX-SAMPLEAPP
For good measure, let’s confirm that we can access it just fine. Do you know how?
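If not, here is one way (a sketch, using the /hello endpoint we rely on throughout these exercises): look up the external IP the LoadBalancer Service got assigned and curl it directly:
export SAMPLEAPP_IP=$(kubectl get service sampleapp -o jsonpath='{.status.loadBalancer.ingress[].ip}')
curl http://$SAMPLEAPP_IP:8080/hello; echo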
So far so good:
graph LR;
I{Internet} -->|LoadBalancer:8080| S
subgraph Kubernetes
subgraph your Namespace
S(sampleapp)
end
end
Exercise - Expose our sample application using Ingress
Now create sampleapp-ingress.yaml by executing the following (yes, execute it all at once), utilizing your personal
namespace as a host component, and our Ingress IP by way of nip.io
wildcard DNS as a suffix:
cat <<.EOF > sampleapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sampleapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /\$1
spec:
  ingressClassName: nginx
  rules:
  - host: hello.$NAMESPACE.$INGRESS_IP_NGINX.nip.io
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: sampleapp
            port:
              number: 8080
.EOF
Create the Ingress resource by running the following command:
kubectl apply -f sampleapp-ingress.yaml
Milestone: K8S/INGRESS/NGINX-SAMPLEAPP-INGRESS
Verify the IP address is set:
kubectl get ingress sampleapp
NAME CLASS HOSTS ADDRESS PORTS AGE
sampleapp nginx hello.<your_namespace>.<your_Ingress_IP>.nip.io <your_Ingress_IP> 80 2m54s
Note
This can take a couple of minutes.
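If you prefer to watch the address appear instead of re-running the command, kubectl get ingress sampleapp --watch should do as well. And in case you wonder how the nip.io wildcard DNS trick works: such a host name simply resolves to the IP address embedded in it, which you can verify yourself, e.g. (assuming getent is available on your user VM):
getent hosts hello.$NAMESPACE.$INGRESS_IP_NGINX.nip.io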
Conceptually this is now really similar to what we had done in Services; we could visualize the access path like this:
graph LR;
A{Internet} -->|LoadBalancer:80| N
subgraph Kubernetes
subgraph Ingress Namespace
N(Nginx Ingress)
end
subgraph your Namespace
S(sampleapp)
N -->|ClusterIP:8080| S
end
end
Then verify we can access our sampleapp using Ingress:
curl --verbose http://hello.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/hello; echo
* Trying <your_Ingress_IP>:80...
* Connected to hello.<your_namespace>.<your_Ingress_IP>.nip.io (<your_Ingress_IP>) port 80 (#0)
> GET /hello HTTP/1.1
> Host: hello.<your_namespace>.<your_Ingress_IP>.nip.io
> User-Agent: curl/7.88.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Fri, 01 Dec 2023 12:56:15 GMT
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 57
< Connection: keep-alive
<
* Connection #0 to host hello.<your_namespace>.<your_Ingress_IP>.nip.io left intact
Hello World (from sampleapp-65779bd948-6gcm7) to somebody
OK, so this works. Now let’s not expose our sampleapp via a LoadBalancer Service anymore, but restrict access to ClusterIP, i.e. disallow direct access from outside of our cluster:
kubectl delete services sampleapp; kubectl expose deployment sampleapp --type ClusterIP --port 8080
Milestone: K8S/INGRESS/NGINX-SAMPLEAPP-SERVICE
And confirm we can still access it via Ingress:
curl http://hello.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/hello; echo
Hello World (from sampleapp-65779bd948-6gcm7) to somebody
Well, it still works: via Ingress our request gets forwarded cluster-internally to the ClusterIP of our Service, and from there to the sampleapp Pod. But how do we benefit from this? Not much so far with such a simple application, and with only a single application we didn’t even save on the number of external IP addresses required. But let’s dive into more detail.
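If you want to double-check that the direct external path is really gone, have a look at the Service again: the TYPE column should now read ClusterIP and EXTERNAL-IP should show <none>, so the old LoadBalancer URL will no longer answer:
kubectl get service sampleapp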
Exercise - Extend our sample application
Create sampleapp-subpath.yaml from the following file (i.e. do everything we have done for our sampleapp again, but
now a bit differently, and from YAML and not directly via command line):
apiVersion: v1
kind: Service
metadata:
  name: sampleapp-subpath
spec:
  type: ClusterIP
  ports:
  - port: 8080
  selector:
    app: sampleapp-subpath
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp-subpath
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp-subpath
  template:
    metadata:
      name: sampleapp-subpath
      labels:
        app: sampleapp-subpath
    spec:
      containers:
      - name: sampleapp-subpath
        env:
        - name: PROPERTY
          value: everyone via Ingress from a subpath
        image: novatec/technologyconsulting-hello-container:v0.1
      restartPolicy: Always
Create these resources by running the following command:
kubectl apply -f sampleapp-subpath.yaml
Milestone: K8S/INGRESS/NGINX-SAMPLEAPP-SUBPATH
What would this application serve, and how could you access it right now?
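One way to take a peek even before we wire it up to the Ingress (a sketch, assuming the familiar /hello endpoint): port-forward the new ClusterIP Service to your user VM and curl it locally:
kubectl port-forward service/sampleapp-subpath 8080:8080 &
sleep 2; curl http://localhost:8080/hello; echo
kill %1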
Now extend our sampleapp-ingress.yaml as follows:
      - path: /subpath/(.*)
        pathType: Prefix
        backend:
          service:
            name: sampleapp-subpath
            port:
              number: 8080
Apply the change:
kubectl apply -f sampleapp-ingress.yaml
Milestone: K8S/INGRESS/NGINX-SAMPLEAPP-SUBPATH-INGRESS
Verify that the Ingress is configured as intended:
kubectl describe ingress sampleapp
Name: sampleapp
Labels: <none>
Namespace: <your_namespace>
Address: <your_Ingress_IP>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
hello.<your_namespace>.<your_Ingress_IP>.nip.io
/(.*) sampleapp:8080 (172.17.0.4:8080)
/subpath/(.*) sampleapp-subpath:8080 (172.17.0.5:8080)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 51s (x3 over 5m22s) nginx-ingress-controller Scheduled for sync
(Please note that even though the Ingress resource references Services by name, and those names are listed for each path, the IP addresses shown there are those of the corresponding Pods.)
And then access our application, both normally and the subpath:
curl http://hello.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/hello; echo
curl http://hello.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/subpath/hello; echo
which should yield
Hello World (from sampleapp-65779bd948-6gcm7) to somebody
Hello World (from sampleapp-subpath-7d466c4ccb-jlrcb) to everyone via Ingress from a subpath
In other words, the most specific match wins, and we have integrated two slightly different applications, each served by a separate microservice, into a single serving domain, like this:
hello.$NAMESPACE.$INGRESS_IP_NGINX.nip.io:80 -> $INGRESS_IP_NGINX:80 -> / sampleapp:8080
/subpath/ sampleapp-subpath:8080
graph LR;
A{Internet} -->|LoadBalancer:80| N
subgraph Kubernetes
subgraph Ingress Namespace
N(Nginx Ingress)
end
subgraph your Namespace
S(sampleapp)
N -->|/<br />ClusterIP:8080| S
U(sampleapp-subpath)
N -->|/subpath/<br />ClusterIP:8080| U
end
end
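A quick word on why the subpath variant answers at all even though the application itself only knows /hello: the rewrite-target: /$1 annotation replaces the request path with whatever the (.*) capture group matched before the request is handed to the backend, roughly like this:
/hello -> matches /(.*) -> forwarded as /hello -> sampleapp
/subpath/hello -> matches /subpath/(.*) -> forwarded as /hello -> sampleapp-subpath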
Of course, with our sampleapp this doesn’t yet make much sense. But think of different microservices each serving a specific version of an API, all accessible at /api/v1/, /api/v2/, /api/v3/ … Or, more generally, think of any multi-microservice-driven application that needs to comply with the Same-origin policy, and you will find that you cannot implement this via simple LoadBalancer Service definitions.
This is where Ingress shines, and if all this reminds you of Application Gateways, then you are absolutely right. Incidentally, the AKS-specific Ingress Controller implementation is called AKS Application Gateway Ingress Controller after all.
So, in line with this, what else can we do with our Ingress?
Exercise - Ingress’ify our ToDo application
Let’s achieve this here now:
todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io:80 -> todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io:443
todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io:443 -> $INGRESS_IP_NGINX:443 -> / todoui:8090
/backend/v1/ todobackend-v1:8080 w/ basic auth (for debugging)
/backend/v2/ todobackend:8080 w/ basic auth (for debugging)
graph LR;
A{Internet} -->|LoadBalancer:443 TLS| N
A -->|LoadBalancer:80 redirect to :443| N
subgraph Kubernetes
subgraph Ingress Namespace
N(Nginx Ingress)
end
subgraph your Namespace
U(todoui)
N -->|/<br />ClusterIP:8090| U
O(todobackend-v1)
N -->|/backend/v1/ basic auth<br />ClusterIP:8080| O
B(todobackend)
N -->|/backend/v2/ basic auth<br />ClusterIP:8080| B
U -->|ClusterIP:8080| B
P(postgresdb)
O -->|ClusterIP:5432| P
B -->|ClusterIP:5432| P
end
end
So we will need a redirect, some Ingress backends, basic auth settings, and a certificate for TLS termination. (Of course, normally you’d have versioned all backend Deployments, but let’s just take the easy path here and work with what we already have.)
TLS certificate
Self-signed will suffice for now:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
-keyout todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io.key \
-out todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io.crt \
-subj "/CN=todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io"
which will print a few lines of symbols indicating that it is generating the key material.
Verify the certificate’s CN:
openssl x509 -in todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io.crt -noout -subject
subject=CN = todo.<your_namespace>.<your_Ingress_IP>.nip.io
Load this as a Kubernetes secret:
kubectl create secret tls todo-nginx-tls-secret --key todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io.key --cert todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io.crt
secret/todo-nginx-tls-secret created
And verify it is present:
kubectl describe secrets todo-nginx-tls-secret
Name: todo-nginx-tls-secret
Namespace: <your_namespace>
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 1180 bytes
tls.key: 1704 bytes
Remember from Exercise - create a Secret how to verify the contents?
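In case you don't: something along these lines should work to extract the certificate and double-check its subject (note the escaped dot in the jsonpath key):
kubectl get secret todo-nginx-tls-secret -o jsonpath='{.data.tls\.crt}' | base64 --decode | openssl x509 -noout -subject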
Milestone: K8S/INGRESS/NGINX-TODOAPP-CERT
Basic auth
Well, we do not have htpasswd installed locally. No problem, Docker to the rescue:
Info
This could take some time because your local docker daemon has to download the httpd:latest image!
docker run -it --rm -v $(pwd):/tempdir -w /tempdir httpd:latest htpasswd -c auth backenddebugger
New password: password
Re-type new password: password
Adding password for user backenddebugger
Yes, it is important that the generated file is named auth (more precisely: that the secret which we are about to create has a key data.auth), and yes, let's use the literal password as the password, just for the sake of simplicity.
Verify the contents which have conveniently been placed into our current work directory:
cat auth
backenddebugger:$apr1$K3X6TKeX$RQaGr8FqnXYxgDU4Z4lBa0
Load this as a Kubernetes secret, with a generic name as we are going to reuse it later with Traefik:
kubectl create secret generic todo-backend-basic-auth --from-file auth
secret/todo-backend-basic-auth created
And verify it is present and it contains the correct data:
kubectl describe secrets todo-backend-basic-auth
Name: todo-backend-basic-auth
Namespace: <your_namespace>
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
auth: 54 bytes
kubectl get secrets todo-backend-basic-auth -o jsonpath='{.data.auth}' | base64 --decode
backenddebugger:$apr1$K3X6TKeX$RQaGr8FqnXYxgDU4Z4lBa0
Milestone: K8S/INGRESS/NGINX-TODOAPP-AUTH
Ingress
Now it is time to plug it all together. Create a file todoapp-ingress.yaml by executing the following (yes,
execute it all at once), again utilizing your personal namespace as a host prefix:
cat <<.EOF > todoapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: todo-nginx
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io
    secretName: todo-nginx-tls-secret
  rules:
  - host: todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: todoui
            port:
              number: 8090
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: todo-nginx-backend-basic-auth
  annotations:
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ToDo App Backend"
    nginx.ingress.kubernetes.io/auth-secret: todo-backend-basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/rewrite-target: /\$1
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io
    secretName: todo-nginx-tls-secret
  rules:
  - host: todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io
    http:
      paths:
      - path: /backend/v1/(.*)
        pathType: Prefix
        backend:
          service:
            name: todobackend-v1
            port:
              number: 8080
      - path: /backend/v2/(.*)
        pathType: Prefix
        backend:
          service:
            name: todobackend
            port:
              number: 8080
.EOF
Apply it with kubectl apply -f todoapp-ingress.yaml and verify what has been created:
kubectl describe ingress todo-nginx todo-nginx-backend-basic-auth
[...]
TLS:
todo-nginx-tls-secret terminates todo.<your_namespace>.<your_Ingress_IP>.nip.io
Rules:
Host Path Backends
---- ---- --------
todo.<your_namespace>.<your_Ingress_IP>.nip.io
/ todoui:8090 (172.17.0.8:8090)
[...]
TLS:
todo-nginx-tls-secret terminates todo.<your_namespace>.<your_Ingress_IP>.nip.io
Rules:
Host Path Backends
---- ---- --------
todo.<your_namespace>.<your_Ingress_IP>.nip.io
/backend/v1/(.*) todobackend-v1:8080 (<error: endpoints "todobackend-v1" not found>)
/backend/v2/(.*) todobackend:8080 (172.17.0.9:8080)
Annotations: nginx.ingress.kubernetes.io/auth-realm: Authentication Required - ToDo App Backend
nginx.ingress.kubernetes.io/auth-secret: todo-backend-basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/rewrite-target: /$1
[...]
Milestone: K8S/INGRESS/NGINX-TODOAPP-INGRESS
Verification
Well, but does it actually work as intended? Let’s find out by first verifying TLS termination:
curl --verbose --insecure https://todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/ | head -n 20
[...]
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted h2
* Server certificate:
* subject: CN=todo.<your_namespace>.<your_Ingress_IP>.nip.io
* start date: Dec 1 13:01:46 2023 GMT
* expire date: Nov 30 13:01:46 2024 GMT
* issuer: CN=todo.<your_namespace>.<your_Ingress_IP>.nip.io
* SSL certificate verify result: self signed certificate (18), continuing anyway.
[...]
<!DOCTYPE HTML>
<html>
<head>
<title>Schönste aller Todo Listen</title>
[...]
Then let’s verify the redirect HTTP->HTTPS is present:
curl --verbose http://todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/
[...]
< HTTP/1.1 308 Permanent Redirect
[...]
< Location: https://todo.<your_namespace>.<your_Ingress_IP>.nip.io/
[...]
And confirm the contents on redirect:
curl --silent --location --insecure http://todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/ | head -n 4
<!DOCTYPE HTML>
<html>
<head>
<title>Schönste aller Todo Listen</title>
So, TLS termination, including a redirect HTTP->HTTPS, seems to work just fine.
Tip
More details / checks wanted? Run
docker run --rm -it drwetter/testssl.sh:latest https://todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/
and enjoy.
Now let’s check basic auth on the backend:
curl --verbose --insecure https://todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/backend/v2/todos/; echo
[...]
< HTTP/2 401
< date: Fri, 01 Dec 2023 13:04:41 GMT
< content-type: text/html
< content-length: 172
< www-authenticate: Basic realm="Authentication Required - ToDo App Backend"
[...]
OK, we indeed need to authenticate, so let’s try this while inserting sample data (if none already present):
curl --request POST --insecure https://todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/backend/v2/todos/testabc --user backenddebugger:password; echo
added testabc
And query data from backend:
curl --silent --insecure https://todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io/backend/v2/todos/ --user backenddebugger:password; echo
["testabc"]All in all, now looking back at what we attempted to achieve we can see that we are almost there:
todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io:80 -> todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io:443
todo.$NAMESPACE.$INGRESS_IP_NGINX.nip.io:443 -> $INGRESS_IP_NGINX:443 -> / todoui:8090
/backend/v1/ todobackend-v1:8080 w/ basic auth (for debugging)
/backend/v2/ todobackend:8080 w/ basic auth (for debugging)
The redirect HTTP->HTTPS is in place, TLS termination works, we can access our ToDo application just fine, and the basic-auth-protected debugging access to the most recent backend version is present as well.
But what about /backend/v1/? After all, when we verified the Ingress, Kubernetes had already mentioned <error: endpoints “todobackend-v1” not found>. Well, that is left as an exercise for the reader now … :-)
Exercise - Further options and outlook
Of course, a wide array of options exists for customization: default backend, errors, headers, ciphers, Let’s Encrypt integration, Mod Security WAF, auto-updating cloud based DNS entries, more than just handling HTTP/HTTPS, an Admission Webhook to validate resources, … Check the upstream documentation for details.
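As an illustration: most of these options are again driven by annotations on the Ingress resources, so they could even be toggled on an existing Ingress, e.g. (an illustrative example only, not something our exercises require; check the upstream annotation reference for exact names and semantics):
kubectl annotate ingress todo-nginx nginx.ingress.kubernetes.io/proxy-body-size=8m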
Just for reference, some more options will be used when dealing with Service Meshes : Opt-In Canary .
And to see how all of this fits together in the end, we can always check the Nginx configuration that the Ingress Controller has put together:
kubectl exec -n ingress-nginx deployment/ingress-nginx-controller -- cat /etc/nginx/nginx.conf | less
# Configuration checksum: 14635718279128137455
[...]
## start server hello.<your_namespace>.<your_Ingress_IP>.nip.io
server {
server_name hello.<your_namespace>.<your_Ingress_IP>.nip.io ;
listen 80 ;
listen 443 ssl http2 ;
[...]
location ~* "^/subpath/(.*)" {
set $namespace "<your_namespace>";
set $ingress_name "sampleapp";
set $service_name "sampleapp-subpath";
set $service_port "8080";
set $location_path "/subpath/(.*)";
[...]
Info
If you get an error message like this:
Error from server (Forbidden): pods "ingress-nginx-controller-6967fb79f6-wcdhk" is forbidden: User "system:serviceaccount:<your_namespace>-ns:<your_namespace>-serviceaccount" cannot create resource "pods/exec" in API group "" in the namespace "ingress-nginx"
don't be surprised: in our education environment the participants are restricted and cannot see everything a cluster admin is allowed to see.