Cordoning Nodes and Managing Pods
When a Kubernetes node is cordoned, it becomes unschedulable, meaning that new pods will not be placed on that node. However, the existing pods on the cordoned node will continue to run. In this section, we will explore how to manage pods on cordoned nodes and the implications of node cordoning.
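As a quick illustration, a node is cordoned with kubectl cordon. The commands below require access to a running cluster, and the node name is a placeholder for a node in your own cluster:

```shell
# Mark the node unschedulable (node name is a placeholder)
kubectl cordon worker-node-1

# The cordoned node now reports SchedulingDisabled in its STATUS column
kubectl get nodes
```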
Handling Pods on Cordoned Nodes
When a node is cordoned, the existing pods on that node will continue to run. However, if those pods need to be rescheduled or scaled, they will not be placed back on the cordoned node. Instead, Kubernetes will attempt to schedule the pods on other available nodes in the cluster.
If a pod needs to be removed from a cordoned node, Kubernetes can evict it: the pod is gracefully terminated, respecting its termination grace period, and if it is managed by a controller such as a Deployment or StatefulSet, a replacement pod is scheduled on another available node. This process is known as "pod eviction." Note that eviction does not preserve a pod's in-memory state, and a bare pod with no controller is simply terminated, not rescheduled. Also note that cordoning by itself does not evict anything; to cordon a node and evict its pods in one step, use kubectl drain.
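A typical drain invocation looks like the following; it requires a running cluster, and the node name is a placeholder:

```shell
# Cordon the node and evict its pods.
# --ignore-daemonsets is usually required, because DaemonSet-managed
# pods cannot be evicted this way and would otherwise block the drain.
# --delete-emptydir-data acknowledges that emptyDir volumes are lost.
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data
```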
To view the pods running on a cordoned node, you can use the following command:
kubectl get pods --field-selector spec.nodeName=<cordoned-node-name>
This command lists all the pods in the current namespace that are running on the specified node; add --all-namespaces to include pods from every namespace.
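Once maintenance is complete, the node can be returned to service with kubectl uncordon, after which the scheduler will again consider it for new pods (node name is a placeholder):

```shell
# Make the node schedulable again
kubectl uncordon worker-node-1
```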
Scheduling Pods on Cordoned Nodes
By default, Kubernetes will not schedule new pods on a cordoned node. However, in some cases, you may want to override this behavior and schedule specific pods on a cordoned node. This can be useful when you need to perform maintenance on a node but still want to keep certain critical workloads running on that node.
To schedule a pod on a cordoned node, you can set the tolerations field in the pod's specification. Tolerations allow a pod to be scheduled on a node that has a matching taint. When a node is cordoned, Kubernetes automatically adds the "node.kubernetes.io/unschedulable" taint with the NoSchedule effect, and a pod that tolerates this taint can still be scheduled on the node.
Here's an example of a pod specification that tolerates the "node.kubernetes.io/unschedulable" taint:
apiVersion: v1
kind: Pod
metadata:
  name: my-critical-pod
spec:
  tolerations:
  - key: "node.kubernetes.io/unschedulable"
    operator: "Exists"
  containers:
  - name: my-container
    image: my-critical-app:v1
By adding this toleration to the pod's specification, the pod becomes eligible for scheduling on the cordoned node, even though the node is marked as unschedulable.
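Keep in mind that a toleration only makes the cordoned node eligible; it does not pin the pod to that node. To target a specific cordoned node, you can combine the toleration with a nodeSelector on the node's hostname label. The sketch below assumes the cordoned node is named worker-node-1:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-critical-pod
spec:
  # Pin the pod to the cordoned node (hostname is an assumption)
  nodeSelector:
    kubernetes.io/hostname: worker-node-1
  # Tolerate the taint added automatically when the node is cordoned
  tolerations:
  - key: "node.kubernetes.io/unschedulable"
    operator: "Exists"
  containers:
  - name: my-container
    image: my-critical-app:v1
```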
Understanding how to manage pods on cordoned nodes is crucial for maintaining and optimizing your Kubernetes cluster during node maintenance or decommissioning.