Labels and Selectors
As you may have noticed, we used a new attribute in our service declaration called selector. By specifying a selector pointing to the label app: todoui, we told the service which pods it should route traffic to. In general, labels are key/value pairs that attach identifying metadata to Kubernetes resources, letting you group and filter your objects according to your own organizational structure.
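As a minimal sketch, labels live under metadata.labels of any resource and are just arbitrary key/value pairs (the extra environment label here is purely illustrative, not part of the exercise):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical pod, for illustration only
  labels:
    app: todoui            # identifies the application
    environment: dev       # any custom key/value pair works
spec:
  containers:
    - name: todoui
      image: novatec/technologyconsulting-containerexerciseapp-todoui:v0.1
```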
Exercise - filter resources using labels
Let’s see how we can use labels to filter the output of kubectl commands. The following statement will list every Kubernetes object labeled with app=todoui:
kubectl get all -l app=todoui
Tip
By applying the ‘-l’ flag to kubectl commands you can filter by label!
Your output should look like this:
NAME                          READY   STATUS    RESTARTS   AGE
pod/todoui-6ff66fdfc9-zpkm6   1/1     Running   0          8m18s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/todoui-6ff66fdfc9   1         1         1       8m18s
Exercise - introduce your own labels
Now that we know how to use labels to filter our resources, let's create our own! To get a better overview of our infrastructure we'd like to divide our applications into three tiers: ‘frontend’, ‘backend’ and ‘database’.
First we edit the deployment for our frontend and introduce a new label called tier. The todoui deployment belongs to the frontend so we set the tier to ‘frontend’:
nano todoui.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todoui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todoui
  template:
    metadata:
      name: todoui
      labels:
        app: todoui
        # add this line
        tier: frontend
    spec:
      containers:
        - name: todoui
          image: novatec/technologyconsulting-containerexerciseapp-todoui:v0.1
      restartPolicy: Always
Now go ahead and add the labels to the todobackend yourself!
Apply the changed manifests with kubectl apply -f todoui.yaml && kubectl apply -f todobackend.yaml.
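For comparison, the todobackend pod template should end up with a tier: backend label. A sketch of the relevant part — your todobackend.yaml may differ in image name and other fields:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todobackend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todobackend
  template:
    metadata:
      labels:
        app: todobackend
        tier: backend      # the new tier label for the backend
    spec:
      containers:
        - name: todobackend
          # image name assumed by analogy with the todoui image
          image: novatec/technologyconsulting-containerexerciseapp-todobackend:v0.1
```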
Milestone: K8S/LABELS/ADD
You should now be able to filter your kubernetes resources by tiers. Try running the following commands and see what happens:
kubectl get pods -l tier=frontend
kubectl get pods -l tier=backend
kubectl get pods -l tier=database
Exercise - advanced filtering with set-based requirements
The label API also allows you to do more complex queries called set-based label requirements. These queries support three kinds of operators: in, notin and exists (the exists operator is expressed by the label key alone).
Try any of the following commands and see what they do:
kubectl get pods -l 'tier in (frontend, backend)'
kubectl get pods -l 'tier notin (backend)'
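Set-based requirements are not limited to kubectl filtering: workload resources such as Deployments can also use them in their selectors via matchExpressions. A sketch, not part of this exercise (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # hypothetical deployment, for illustration only
spec:
  replicas: 1
  selector:
    matchExpressions:
      - key: tier
        operator: In       # supported operators: In, NotIn, Exists, DoesNotExist
        values:
          - frontend
          - backend
  template:
    metadata:
      labels:
        tier: frontend     # matches the In expression above
    spec:
      containers:
        - name: example
          image: nginx     # placeholder image
```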
Exercise - set-based requirements for log aggregation
When we deployed our applications for the first time we already learned how to view the logs of a pod by using
kubectl logs. Now that we have multiple pods in different deployments it might be useful to aggregate the logs and
view them in one place. This is where set-based requirements come in handy as we can use them to unify multiple logs in
one command.
Try the following command and see how the logs are aggregated in real-time!
kubectl logs -f -l 'tier in (backend, database)'
Tip
The ‘-f’ flag stands for ‘follow’ and will tail the output of kubectl logs!
Tip
Can't tell which Pod produced which log lines? Append --prefix to the command to prefix each log line with its source (pod name and container name), and optionally --timestamps to include a timestamp on each line, e.g.
kubectl logs -f -l 'tier in (backend, database)' --prefix --timestamps
And for a full list of logging options try checking the command help, as usual: kubectl logs --help.
Label selectors
Labels are not only used to filter kubectl outputs. They are also used by Services and ReplicaSets to define the entities they should manage. Take our previously defined todoui service as an example:
apiVersion: v1
kind: Service
metadata:
  name: todoui
spec:
  type: LoadBalancer
  ports:
    - port: 8090
  selector:
    app: todoui
The selector on this service declaration points to the ‘app’ label of our todoui deployment's pods. Without the selector the service wouldn't know which pods to route to.
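Note that a service selector may list multiple labels, in which case all of them must match (a logical AND). As a sketch, after applying the tier labels from the previous exercise the selector could be narrowed like this:

```yaml
  selector:
    app: todoui
    tier: frontend   # a pod must carry BOTH labels to receive traffic
```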