Labels and Selectors

As you may have noticed, we used a new attribute in our service declaration called selector. By specifying a selector pointing to the label app: todoui, we told the service which pods it should route traffic to. In general, labels can be used to attach identifying metadata to Kubernetes resources and to group and filter your objects based on your own organizational structures.
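
For illustration, labels are plain key/value pairs nested under a resource’s metadata. A minimal sketch (the names here are placeholders, not part of our application):

metadata:
  name: my-app
  labels:
    app: my-app
    tier: frontend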

Exercise - filter resources using labels

Let’s see how we can use labels to filter the output of kubectl commands. The following command will list every Kubernetes object labeled with app=todoui:

kubectl get all -l app=todoui

Tip

By applying the ‘-l’ flag to kubectl commands you can filter by label!

Your output should look like this:

NAME                          READY   STATUS    RESTARTS   AGE
pod/todoui-6ff66fdfc9-zpkm6   1/1     Running   0          8m18s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/todoui-6ff66fdfc9   1         1         1       8m18s
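
If you are curious which labels are already attached to your pods, the --show-labels flag appends them as an extra column. Note that Kubernetes itself adds a pod-template-hash label to every pod managed by a deployment:

kubectl get pods --show-labels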

Exercise - introduce your own labels

Now that we know how to use labels to filter our resources, let’s create our own! To get a better overview of our infrastructure, we’d like to divide our applications into three tiers: ‘frontend’, ‘backend’ and ‘database’.

First, we edit the deployment for our frontend and introduce a new label called tier. The todoui deployment belongs to the frontend, so we set the tier to ‘frontend’:

nano todoui.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: todoui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todoui
  template:
    metadata:
      name: todoui
      labels:
        app: todoui
        # add this line
        tier: frontend
    spec:
      containers:
      - name: todoui
        image: novatec/technologyconsulting-containerexerciseapp-todoui:v0.1
      restartPolicy: Always

Now go ahead and add the labels to the todobackend yourself!

Solution

We edit the todobackend deployment and add the line ‘tier: backend’:

nano todobackend.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: todobackend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todobackend
  template:
    metadata:
      name: todobackend
      labels:
        app: todobackend
        # add this line
        tier: backend
    spec:
      containers:
        - name: todobackend
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: POSTGRES_HOST
              value: postgresdb
          image: novatec/technologyconsulting-containerexerciseapp-todobackend:v0.1
      restartPolicy: Always

There is no need to edit our postgresdb deployment and add the label ‘tier: database’, as that label has been present right from the beginning. You will see the reason for this later.

Apply the changed manifests with kubectl apply -f todoui.yaml && kubectl apply -f todobackend.yaml.
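
To verify that the new labels were applied, you can display them as a dedicated column using the -L (label columns) flag:

kubectl get pods -L tier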

Milestone: K8S/LABELS/ADD

You should now be able to filter your Kubernetes resources by tier. Try running the following commands and see what happens:

kubectl get pods -l tier=frontend

kubectl get pods -l tier=backend

kubectl get pods -l tier=database

Solution
$ kubectl get pods -l tier=frontend
NAME                      READY   STATUS    RESTARTS   AGE
todoui-6767f8695c-glcmn   1/1     Running   0          10s
$ kubectl get pods -l tier=backend
NAME                           READY   STATUS    RESTARTS   AGE
todobackend-67fd9b6c69-qqs7p   1/1     Running   0          18s
$ kubectl get pods -l tier=database
NAME                          READY   STATUS    RESTARTS   AGE
postgresdb-7b79787498-spm7p   1/1     Running   0          21m
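
By the way, equality-based selectors can be combined with a comma, which acts as a logical AND. The following, for example, should return only the todoui pod, as it is the only one carrying both labels:

kubectl get pods -l app=todoui,tier=frontend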

Exercise - advanced filtering with set-based requirements

The label API also allows for more complex queries called set-based label requirements. These queries enable you to use three kinds of operators: in, notin and exists (the exists operator is demonstrated after the solution below).

Try any of the following commands and see what they do:

kubectl get pods -l 'tier in (frontend, backend)'

kubectl get pods -l 'tier notin (backend)'

Solution
# this will list every pod that is either labeled frontend OR backend.
$ kubectl get pods -l 'tier in (frontend, backend)'
NAME                           READY   STATUS    RESTARTS   AGE
todobackend-67fd9b6c69-qqs7p   1/1     Running   0          62s
todoui-6767f8695c-glcmn        1/1     Running   0          13m
# this will list every pod that is NOT labeled with backend.
$ kubectl get pods -l 'tier notin (backend)'
NAME                          READY   STATUS    RESTARTS   AGE
postgresdb-6c9bd7c5d8-kd4lw   1/1     Running   0          22m
todoui-6767f8695c-glcmn       1/1     Running   0          13m
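
The third operator, exists, was not part of the exercise above. It matches on the mere presence (or, when prefixed with !, the absence) of a label key, regardless of its value:

# list every pod that carries a tier label, whatever its value
kubectl get pods -l tier

# list every pod that carries no tier label at all
kubectl get pods -l '!tier'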

Exercise - set-based requirements for log aggregation

When we deployed our applications for the first time, we already learned how to view the logs of a pod using kubectl logs. Now that we have multiple pods in different deployments, it might be useful to aggregate their logs and view them in one place. This is where set-based requirements come in handy, as they let us combine the logs of multiple pods in a single command.

Try the following command and see how the logs are aggregated in real-time!

kubectl logs -f -l 'tier in (backend, database)'

Tip

The ‘-f’ flag stands for ‘follow’ and will tail the output of kubectl logs!

Tip

Don’t want to guess which pod produced which log lines? Try appending --prefix to the command, which will prefix each log line with its source (pod name and container name). You can also add --timestamps to include a timestamp on each line of the log output, e.g.

kubectl logs -f -l 'tier in (backend, database)' --prefix --timestamps

And for a full list of logging options try checking the command help, as usual: kubectl logs --help.
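
One option worth pointing out when following many pods at once is --tail, which limits the output to the most recent lines per container, e.g.:

kubectl logs -f -l 'tier in (backend, database)' --prefix --tail=20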

Label selectors

Labels are not only used to filter kubectl outputs. They are also used by Services and ReplicaSets to define the entities they should manage. Take our previously defined todoui service as an example:

apiVersion: v1
kind: Service
metadata:
  name: todoui
spec:
  type: LoadBalancer
  ports:
    - port: 8090
  selector:
    app: todoui

The selector in this service declaration matches the ‘app: todoui’ label on the pods created by our todoui deployment. Without the selector, the service wouldn’t know which pods to route traffic to.
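
If you ever want to check which pods a service’s selector currently matches, inspect the service’s endpoints; the listed addresses belong to the matching pods:

kubectl describe service todoui

kubectl get endpoints todoui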