Services
Now, in order to access the app from outside the Pod, or between Pods, you need to expose it to the network using a Service.
Exercise - Inspect YAML files for services
Similar to the previous exercises, there are complete files for your reference and one with gaps to fill in yourself.
In the exercise directory you will find 3 files to deploy services.
ls -ltr *-service.yaml
to get the following overview:
-rw-rw-r-- 1 novatec novatec 140 Dec 1 10:19 todobackend-service.yaml
-rw-rw-r-- 1 novatec novatec 136 Dec 1 10:19 postgres-service.yaml
-rw-rw-r-- 1 novatec novatec 133 Dec 1 10:19 todoui-service.yaml
Have a look at the files for UI and backend:
cat todoui-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: todoui
spec:
  type: LoadBalancer
  ports:
  - port: 8090
  selector:
    app: todoui
cat todobackend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: todobackend
spec:
  type: ClusterIP
  ports:
  - port: 8080
  selector:
    app: todobackend
You will notice they differ in two points: the type of Service (ClusterIP vs. LoadBalancer) and the exposed port.
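In both cases, the selector determines which Pods receive the traffic: the Service forwards requests to all Pods whose labels match it. For that to work, the Pod template in the corresponding Deployment must carry the matching label. As a sketch (the actual Deployment files were created in the previous exercises; the image is elided here):

```yaml
# Excerpt of a matching Deployment (sketch) -- the Pod template label
# must equal the Service selector for traffic to be routed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todoui
spec:
  selector:
    matchLabels:
      app: todoui
  template:
    metadata:
      labels:
        app: todoui    # matches the Service selector "app: todoui"
    spec:
      containers:
      - name: todoui
        image: ...     # image as defined in the earlier exercise
```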
Exercise - Change the kubectl overview
If you still have the separate window with the “watch kubectl” call running, now is a good time to update it.
The previous version was running:
watch -n1 kubectl get deployment,replicaset,pod
Interrupt it with Ctrl+C
As we are now looking at Service objects, you can simply add them to the list:
watch -n1 kubectl get deployment,replicaset,pod,service
Alternatively you may run:
watch -n1 kubectl get all
It shows the same objects, but in a different order. Pick whichever you prefer.
Exercise - Apply the files
Go forward and create the services:
kubectl apply -f todoui-service.yaml
Milestone: K8S/SERVICES/TODOUI
kubectl apply -f todobackend-service.yaml
Milestone: K8S/SERVICES/TODOBACKEND
The service section of the “kubectl get” output will now change to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/todobackend ClusterIP 10.0.3.171 <none> 8080/TCP 10s
service/todoui LoadBalancer 10.0.135.115 20.23.133.170 8090:32023/TCP 16s
You can see that there are two new services, one of type ClusterIP and one of type LoadBalancer: the former is only reachable within the Kubernetes cluster, while the latter is also available on an external IP address.
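A note on the `8090:32023/TCP` entry of the LoadBalancer service: the second number is a node port that Kubernetes allocated automatically (LoadBalancer services build on NodePort). Should you ever need a fixed node port, you could pin it explicitly, roughly like this (a sketch; 30090 is just an example value and must lie within the cluster's NodePort range, usually 30000-32767):

```yaml
# Sketch: pinning the automatically allocated node port to a fixed value.
apiVersion: v1
kind: Service
metadata:
  name: todoui
spec:
  type: LoadBalancer
  ports:
  - port: 8090
    nodePort: 30090   # example value within the default range 30000-32767
  selector:
    app: todoui
```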
Tip
In case you have many services, it might become difficult to browse through the list. Custom sorting can help then,
e.g. kubectl get services --sort-by .spec.clusterIP.
Exercise - Complete the YAML for the PostgresDB service
The YAML file for the database requires some editing. Fill in the blanks (------) with the suitable content to create a Service for the database. Try to complete it yourself, or look at the solution below.
nano postgres-service.yaml
apiVersion: v1
kind: ------
metadata:
  name: postgresdb
spec:
  type: ------
  ports:
  - port: ------
  selector:
    app: postgresdb
After you have filled in the blanks, tell Kubernetes to create the Service by running the apply command.
kubectl apply -f postgres-service.yaml
The service section of the “kubectl get” output will now change to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgresdb ClusterIP 10.0.124.93 <none> 5432/TCP 7s
service/todobackend ClusterIP 10.0.3.171 <none> 8080/TCP 70s
service/todoui LoadBalancer 10.0.135.115 20.23.133.170 8090:32023/TCP 76s
Milestone: K8S/SERVICES/POSTGRES
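For reference, one possible completed version of the file, inferred from the “kubectl get” output above (kind Service, type ClusterIP, and the standard PostgreSQL port 5432):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgresdb
spec:
  type: ClusterIP    # internal only -- the database should not be reachable externally
  ports:
  - port: 5432       # standard PostgreSQL port, matching the output above
  selector:
    app: postgresdb
```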
Exercise - Access the web page
At the end, our Deployments and Services will be connected like this:
graph LR;
  A{Internet} -->|LoadBalancer:8090| B
  subgraph Kubernetes
    subgraph s1["Deployment -> Pod"]
      B(todoui)
    end
    subgraph s2["Deployment -> Pod"]
      C(todobackend)
      B -->|ClusterIP:8080| C
    end
    subgraph s3["Deployment -> Pod"]
      D(postgresdb)
      C -->|ClusterIP:5432| D
    end
  end
Now make sure you can access the web page from your CLI; use curl for this:
curl <EXTERNAL-IP>:<PORT> | head
You will then be able to access it in your local browser at <EXTERNAL-IP>:<PORT> (or, alternatively, if some
local tool blocks access to raw IP addresses, at <EXTERNAL-IP>.nip.io:<PORT>, making use of the
nip.io wildcard DNS resolver),
and you should see the application as in the following picture:
Exercise - Put the web page behind a reverse proxy
Of course, if we don’t like the fact that our todoui is only reachable on a high port, we could expose it on port 80 as well, thus creating yet another Service in addition to the previously-defined Service, using a single ad-hoc command:
kubectl expose deployment todoui --type LoadBalancer --port 80 --target-port 8090 --name todoui-port80
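For comparison, the same ad-hoc command expressed declaratively would look roughly like this (a sketch of the Service that `kubectl expose` generates; the exact labels it copies come from the Deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: todoui-port80
spec:
  type: LoadBalancer
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 8090   # port the todoui container actually listens on
  selector:
    app: todoui
```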
which will add this:
graph LR;
  A{Internet} -->|old LoadBalancer:8090| B
  A -->|new LoadBalancer:80| B
  subgraph Kubernetes
    subgraph s1["Deployment -> Pod"]
      B(todoui)
    end
    subgraph s2["Deployment -> Pod"]
      C(todobackend)
      B -->|ClusterIP:8080| C
    end
    subgraph s3["Deployment -> Pod"]
      D(postgresdb)
      C -->|ClusterIP:5432| D
    end
  end
Milestone: K8S/SERVICES/TODOUI-PORT80
Check the resulting new service (beware, it might take a few moments for the external IP address to be allocated):
kubectl get service todoui-port80
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
todoui-port80 LoadBalancer 10.0.242.62 <pending> 80:30306/TCP 6s
and, once the external IP has been allocated, access it just fine:
curl --silent <EXTERNAL-IP> | head -n 4
<!DOCTYPE HTML>
<html>
<head>
<title>Schönste aller Todo Listen</title>
However, let’s say we don’t like to just expose our application on the standard HTTP port, but instead we want to apply some means of protection first. That could be done by a reverse proxy, so we are going to put an Nginx instance in front of our todoui now, like this:
graph LR;
  A{Internet} -->|old LoadBalancer:8090| B
  A -->|new LoadBalancer:80| N
  subgraph Kubernetes
    subgraph s1["Deployment -> Pod"]
      N(Nginx Reverse Proxy)
    end
    subgraph s2["Deployment -> Pod"]
      B(todoui)
      N -->|ClusterIP:8090| B
    end
    subgraph s3["Deployment -> Pod"]
      C(todobackend)
      B -->|ClusterIP:8080| C
    end
    subgraph s4["Deployment -> Pod"]
      D(postgresdb)
      C -->|ClusterIP:5432| D
    end
  end
A reverse proxy is a type of proxy server that retrieves resources on behalf of a client. These resources are then returned to the client, appearing as if they originated from the server itself. This allows for shielding an application from the clients for additional protection against several attack vectors or for providing additional features (e.g. TLS termination). Quite often, popular and battle-proven web servers, just like Nginx, are used for this purpose.
So, first delete the ad-hoc service again (well, we could keep it running in parallel, but let’s save external IP addresses now):
kubectl delete service todoui-port80
Milestone: K8S/SERVICES/TODOUI-PORT80-RM
And now - again ad-hoc - create a rather minimal Nginx configuration encapsulated in a ConfigMap for use in a Pod, via the following command (yes, execute it all at once):
kubectl apply -f - <<.EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |-
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
      worker_connections 1024;
    }
    http {
      keepalive_timeout 65;
      upstream todoui {
        server todoui:8090; # service name and port of our Kubernetes service
      }
      server {
        listen 80;
        location / {
          proxy_pass http://todoui;
        }
      }
    }
.EOF
Milestone: K8S/SERVICES/REVERSEPROXY-CONFIGMAP
Admittedly, this config does not yet contain any actual means of protection; setting up e.g. a real web application firewall (WAF) goes beyond the scope of these lectures, so we are just going to illustrate the principle of networking the services.
Remember, just like when we created our first ConfigMap, you can view ConfigMap contents via
kubectl get configmaps [...] and kubectl describe configmap ...
Now let’s create a Deployment that makes use of the config encapsulated in this ConfigMap, also ad-hoc (yes, execute it all at once):
kubectl apply -f - <<.EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reverseproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reverseproxy
  template:
    metadata:
      labels:
        app: reverseproxy
    spec:
      containers:
      - image: nginx:alpine
        name: reverseproxy
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-reverseproxy-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: nginx-reverseproxy-config
        configMap:
          name: nginx-config
.EOF
Milestone: K8S/SERVICES/REVERSEPROXY-DEPLOYMENT
Please note how this ConfigMap gets mapped to a volume which in turn will be mounted just right where Nginx expects to find its configuration. In other words, we can use a generic Nginx container image and still configure the instance fully to our liking.
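For contrast, omitting `subPath` would mount the ConfigMap as a whole directory, with every key appearing as a file below the mount path. A sketch of that variant, shown only to illustrate the difference:

```yaml
# Sketch: mounting the ConfigMap as a directory instead of a single file.
# Note this would replace the entire /etc/nginx directory (shadowing files
# like mime.types shipped in the image) -- which is exactly why the
# single-file subPath mount is used above.
volumeMounts:
- name: nginx-reverseproxy-config
  mountPath: /etc/nginx    # all ConfigMap keys appear as files here
```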
And expose this new deployment now:
kubectl expose deployment reverseproxy --type LoadBalancer --port 80
Milestone: K8S/SERVICES/REVERSEPROXY-EXPOSE
Check the resulting new service (beware, it might take a few moments for the external IP address to be allocated):
kubectl get service reverseproxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
reverseproxy LoadBalancer 10.0.97.53 20.82.83.63 80:32285/TCP 9s
and access it just fine:
curl --verbose <EXTERNAL-IP> | head -n 20
[...]
< HTTP/1.1 200
< Server: nginx/1.25.3
[...]
<!DOCTYPE HTML>
<html>
<head>
<title>Schönste aller Todo Listen</title>
[...]
As you can see, our Nginx is in place and reverse-proxying our requests as it should. But would we really set this up like we did?
Well, we could, but after all it would be rather wasteful to have a separate Deployment and LoadBalancer Service with a separate external IP address for each application that we’d like to protect. Of course, we could consolidate all Deployments and Services behind a single Reverse Proxy, but then we’d have to manually adjust its config each time we add or remove something which would prove more than just a bit unwieldy.
Luckily, Kubernetes provides just the means to address this: Ingress resources, managed by Ingress Controllers, allow us to abstract away this tedious task, as we will see later in these lectures.
So, here and now we can just delete the ad-hoc-created resources:
kubectl delete deployment,service reverseproxy
kubectl delete configmap nginx-config
Milestone: K8S/SERVICES/REVERSEPROXY-RM