Labels are the mechanism in Kubernetes through which system objects are given an organisational structure. A label is a key-value pair, subject to well-defined restrictions on the length and content of its key and value, that is attached to a system object as an identifier, making the object more meaningful within the system as a whole.
Whilst labels (and selectors) underpin how Pods are managed within a Deployment, they can also be used in ways more conducive to diagnosing the containers that make up those Pods.
When there is a problematic Pod running within a cluster, it may not be desirable to destroy it without first understanding what went wrong. Instead, the Pod should be removed from the load balancer and inspected whilst no longer serving traffic.
This can be accomplished through the use of labels and selectors within the Kubernetes manifest file describing a Deployment. In particular, a label such as `serving: true` may indicate to Kubernetes that a Pod should be placed within the load balancer and should be serving traffic.
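A minimal sketch of what such a manifest might look like follows; the names `api`, the `app` label, the image, and the ports are assumptions made for illustration. Note that Kubernetes label values must be strings, so the boolean-looking value is quoted in YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                      # assumed name, for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
      serving: "true"            # label values must be strings
  template:
    metadata:
      labels:
        app: api
        serving: "true"
    spec:
      containers:
      - name: api
        image: example/api:1.0   # assumed image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
    serving: "true"              # only Pods labelled serving=true receive traffic
  ports:
  - port: 80
    targetPort: 8080
```

Because `serving: "true"` appears in both selectors, relabelling a Pod with `serving=false` removes it from the Service's endpoints and releases it from the ReplicaSet, which then creates a replacement.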
Note: The `-L` option in the following command specifies that the `serving` label should be included in the table output by `kubectl`.
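The Pod names and timings in this output are illustrative:

```console
$ kubectl get pods -L serving
NAME                  READY   STATUS    RESTARTS   AGE   SERVING
api-441436789-4zvmb   1/1     Running   0          2d    true
api-441436789-7qzv0   1/1     Running   0          2d    true
api-441436789-kmdmh   1/1     Running   0          2d    true
```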
For instance, let's say the Pod named `api-441436789-7qzv0` is returning an increased number of `404` responses, which has been identified as abnormal behaviour. This Pod is a prime candidate for inspection, but doing so whilst it is still serving traffic (and thus many `404` errors) is undesirable. Replacing the Pod with a fresh one may solve the problem temporarily, but finding the root cause of the issue should be the goal. Therefore, to remove this Pod from the load balancer, its labels must be edited in place.
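One way to do this is `kubectl label` with its `--overwrite` flag; the Pod name and label here are from the example above:

```console
$ kubectl label pods api-441436789-7qzv0 serving=false --overwrite
pod/api-441436789-7qzv0 labeled
```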
The ReplicaSet backing the Deployment will spin up a new Pod to replace the one taken out of the load balancer, whilst the problematic Pod remains running for inspection but is no longer available to serve traffic.
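Listing the Pods and the Service's endpoints again shows the effect; the replacement Pod's name and the endpoint addresses below are illustrative:

```console
$ kubectl get pods -L serving
NAME                  READY   STATUS    RESTARTS   AGE   SERVING
api-441436789-4zvmb   1/1     Running   0          2d    true
api-441436789-7qzv0   1/1     Running   0          2d    false
api-441436789-kmdmh   1/1     Running   0          2d    true
api-441436789-mm6gw   1/1     Running   0          10s   true

$ kubectl get endpoints api
NAME   ENDPOINTS                                          AGE
api    10.244.1.12:8080,10.244.2.9:8080,10.244.3.4:8080   2d
```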
Note: In the above snippet, the endpoints that back the Service have changed to reflect the replacement of the problematic Pod.
Now an interactive terminal session can be opened with a container running in the Pod without affecting traffic.
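For example, assuming the container image ships a shell:

```console
$ kubectl exec -it api-441436789-7qzv0 -- /bin/sh
```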
When the problem has been diagnosed, and possibly fixed, the Pod can be returned to the load balancer or, more likely, simply destroyed.
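Either outcome is a one-line command; the Pod name is from the example above:

```console
# Return the Pod to the load balancer by restoring the label...
$ kubectl label pods api-441436789-7qzv0 serving=true --overwrite

# ...or, more likely, destroy it; the ReplicaSet has already replaced it
$ kubectl delete pod api-441436789-7qzv0
```

Be aware that restoring the label leaves the ReplicaSet with one Pod more than its desired count, so it will terminate one of them to get back to the configured number of replicas.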