Clean-up
Delete the app deployments (this also deletes the Pods and ReplicaSets they manage):
kubectl delete deployment sampleapp sampleapp-subpath
kubectl delete deployment postgresdb
kubectl delete deployment todobackend todobackend-v1
kubectl delete deployment todoui
kubectl get deployments
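If you have labeled your app resources consistently, you could also delete them in bulk with a label selector instead of listing each name. A sketch, assuming a hypothetical app.kubernetes.io/part-of=todo label that is not actually set in these exercises:
kubectl delete deployment -l app.kubernetes.io/part-of=todo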
Delete the services associated with your app:
kubectl delete service sampleapp sampleapp-subpath
kubectl delete service postgresdb
kubectl delete service todobackend todobackend-v1
kubectl delete service todoui
kubectl delete services zk-cs zk-hs
kubectl get services
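As an alternative to the two separate steps above, kubectl can delete several resource types that share a name in a single call, for example a deployment together with its service:
kubectl delete deployment,service sampleapp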
Delete the horizontal pod autoscaler associated with your app:
kubectl delete hpa todoui
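Verify that it is gone:
kubectl get hpa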
Delete the secrets and the configmaps:
kubectl delete secret todo-nginx-tls-secret todo-traefik-tls-secret todo-backend-basic-auth
kubectl delete secret db-security
kubectl delete secrets postgres.exercises-minimal-cluster.credentials.postgresql.acid.zalan.do standby.exercises-minimal-cluster.credentials.postgresql.acid.zalan.do
kubectl delete configmap postgres-config
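As with the other resources, a quick check that nothing of yours is left over (entries managed by Kubernetes itself, such as the default service account token, will remain):
kubectl get secrets,configmaps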
Delete the statefulset:
kubectl delete statefulset zk
kubectl get pods
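Note that deleting a StatefulSet does not delete the PersistentVolumeClaims it created; those are cleaned up further below. You can already see them lingering with:
kubectl get pvc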
Delete the Ingress and accompanying Middleware:
kubectl delete ingress sampleapp
kubectl delete ingress todo-nginx todo-nginx-backend-basic-auth
kubectl delete ingress sampleapp-traefik
kubectl delete ingress todo-traefik todo-traefik-redirect todo-traefik-backend-basic-auth
kubectl delete middleware stripprefix-subpath redirect-to-https basic-auth-backend stripprefixregex-backend
kubectl get ingress
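If you also want to double-check the Traefik custom resources:
kubectl get middleware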
Now, if you check the status of the PVs and PVCs, you will notice that they still exist and have to be deleted manually:
kubectl get pv,pvc
kubectl delete pvc postgres-db-data
kubectl delete pv postgres-db-data
kubectl delete pvc datadir-zk-0 datadir-zk-1 datadir-zk-2
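Since the ZooKeeper PVCs only differ in their ordinal suffix, a small shell loop does the same job; a sketch, handy in case you scaled zk to more replicas:
for i in 0 1 2; do kubectl delete pvc "datadir-zk-$i"; done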
And let’s not forget that the actual PVs need to be recycled manually at your cloud provider, even though Kubernetes has already released them.
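Whether that manual recycling is needed depends on each volume's persistentVolumeReclaimPolicy; you can inspect it before deleting, for example with:
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase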
Milestone: K8S/CLEANUP
You can check that everything is cleaned up with
kubectl get all --all-namespaces
You should then only see the remaining Kubernetes system resources, like below:
NAMESPACE       NAME                                                 READY   STATUS    RESTARTS   AGE
ingress-nginx   pod/ingress-nginx-controller-77f5884bdd-xxzcb        1/1     Running   0          8h
kube-system     pod/coredns-869cb84759-s6v4d                         1/1     Running   0          8h
kube-system     pod/coredns-869cb84759-x8cjc                         1/1     Running   0          8h
kube-system     pod/coredns-autoscaler-5b867494f-765tm               1/1     Running   0          8h
kube-system     pod/kube-proxy-29kbr                                 1/1     Running   0          8h
kube-system     pod/kube-proxy-qntsm                                 1/1     Running   0          8h
kube-system     pod/kube-proxy-s6pmb                                 1/1     Running   0          8h
kube-system     pod/metrics-server-6cd7558856-g48ch                  1/1     Running   0          8h
kube-system     pod/tunnelfront-5b546fdd7f-gdcvq                     2/2     Running   0          8h
traefik-v2      pod/traefik-79fb5db687-qlxz7                         1/1     Running   0          8h

NAMESPACE       NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
default         service/kubernetes                           ClusterIP      10.0.0.1       <none>           443/TCP                      8h
ingress-nginx   service/ingress-nginx-controller             LoadBalancer   10.0.129.88    104.45.43.240    80:30146/TCP,443:30582/TCP   8h
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP      10.0.179.103   <none>           443/TCP                      8h
kube-system     service/kube-dns                             ClusterIP      10.0.0.10      <none>           53/UDP,53/TCP                8h
kube-system     service/metrics-server                       ClusterIP      10.0.79.176    <none>           443/TCP                      8h
traefik-v2      service/traefik                              LoadBalancer   10.0.162.34    51.137.215.154   80:31183/TCP,443:30548/TCP   8h

(Beware, this won’t really display all information, cf. a previous note, but it will suffice to give a general overview.)
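If you want a truly exhaustive sweep, a common (if slow) trick is to ask the API server for every listable namespaced resource type and fetch them all:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --all-namespaces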
Thank you!