Introduction
Kubernetes, the popular container orchestration platform, provides a powerful feature called "taints" to control the scheduling of pods on nodes. In this tutorial, we will explore how to view taints applied to Kubernetes nodes, which is an essential skill for effectively managing your Kubernetes cluster.
Taints allow you to mark nodes with specific attributes that can repel certain pods, ensuring workloads are scheduled appropriately based on node capabilities and resources. Understanding how to view and work with taints helps you maintain optimal resource allocation in your Kubernetes environment.
Setting Up a Kubernetes Environment for Testing
Before we can view taints on Kubernetes nodes, we need a functioning Kubernetes environment. For this tutorial, we will use Minikube, which provides a lightweight, local Kubernetes cluster for development and testing purposes.
Let's start by installing Minikube:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
rm minikube-linux-amd64
Now that Minikube is installed, let's start a Kubernetes cluster:
minikube start --driver=docker
You should see output similar to this:
😄 minikube v1.29.0 on Ubuntu 22.04
✨ Using the docker driver based on user configuration
📌 Using Docker driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
🐳 Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Let's verify that the cluster is running by checking the node status:
kubectl get nodes
You should see output similar to this:
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   1m    v1.26.1
Great! We now have a functioning Kubernetes environment to explore taints. The kubectl command is already configured to work with our Minikube cluster.
Understanding Kubernetes Taints
Before we start viewing taints, let's understand what they are and how they work in Kubernetes.
What are Taints?
Taints are properties applied to Kubernetes nodes that allow a node to repel certain pods. Think of taints as markers that flag a node as unsuitable for specific types of workloads (unlike labels, which attract pods via selectors, taints push pods away).
Taints work together with a concept called "tolerations." While taints are applied to nodes, tolerations are applied to pods. A pod with a toleration matching a node's taint can be scheduled on that tainted node.
Taint Structure
Taints consist of three components:
- Key: A string that identifies the taint (e.g., gpu, disk, network)
- Value: An optional string assigned to the key (e.g., true, high-performance)
- Effect: Defines how pods without matching tolerations are treated
The most common taint effects are:
- NoSchedule: New pods without matching tolerations will not be scheduled on the node
- PreferNoSchedule: The scheduler tries to avoid placing pods without matching tolerations on the node, but this is not guaranteed
- NoExecute: New pods without matching tolerations will not be scheduled on the node, and existing pods without matching tolerations are evicted
Here's the syntax for a taint:
- With a value: key=value:effect
- Without a value: key:effect
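To make the key=value:effect structure concrete, here is a small plain-shell sketch (not a kubectl command) that splits a taint string into its three parts using parameter expansion:

```shell
# Split a "key=value:effect" taint string into its parts -- a plain shell
# sketch to illustrate the taint structure, not a kubectl feature.
taint="gpu=true:NoSchedule"
key="${taint%%=*}"                          # everything before the first "="
effect="${taint##*:}"                       # everything after the last ":"
value="${taint#*=}"; value="${value%%:*}"   # the part between "=" and ":"
echo "key=$key value=$value effect=$effect"
# → key=gpu value=true effect=NoSchedule
```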
Some nodes in a Kubernetes cluster have default taints. For example, control plane nodes are often tainted with node-role.kubernetes.io/control-plane:NoSchedule to prevent regular workloads from being scheduled on them, preserving resources for system components.
Let's examine our Minikube node to see if it has any default taints:
kubectl describe node minikube | grep -A3 Taints
You'll likely see output similar to:
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: minikube
This output shows that our Minikube node has a taint that prevents regular pods from being scheduled on it, as it's a control plane node.
Viewing Taints on Kubernetes Nodes
Now that we understand what taints are, let's explore the different methods to view taints applied to Kubernetes nodes.
Method 1: Using kubectl describe
The most detailed way to view taints on a node is using the kubectl describe node command:
kubectl describe node minikube
This command outputs comprehensive information about the node. To focus only on taints, you can use grep:
kubectl describe node minikube | grep -A1 Taints
Example output:
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable: false
Method 2: Using kubectl get with custom-columns
You can use the kubectl get nodes command with custom output columns to display only the taints:
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
Example output:
NAME       TAINTS
minikube   [map[effect:NoSchedule key:node-role.kubernetes.io/control-plane]]
Method 3: Using kubectl get with JSONPath
Another approach is to use JSONPath to extract taint information:
kubectl get nodes minikube -o jsonpath='{.spec.taints}'
Example output:
[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]
For better readability, you can format the output as JSON:
kubectl get nodes minikube -o jsonpath='{.spec.taints}' | jq .
If you don't have jq installed, you can install it with:
sudo apt-get update && sudo apt-get install -y jq
Example formatted output:
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/control-plane"
  }
]
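If you prefer the compact key:effect notation that kubectl describe prints, jq can reshape the JSON for you. This is an optional convenience, shown here against the sample taint JSON from the example above rather than a live cluster:

```shell
# Reshape a taints JSON array into "key[=value]:effect" strings with jq.
# The sample input mirrors the example jsonpath output above; in practice
# you would pipe `kubectl get node <name> -o jsonpath='{.spec.taints}'`.
echo '[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]' |
  jq -r '.[] | "\(.key)\(if .value then "=" + .value else "" end):\(.effect)"'
# → node-role.kubernetes.io/control-plane:NoSchedule
```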
Method 4: Using kubectl get with YAML output
You can also view the complete node specification in YAML format and search for taints:
kubectl get node minikube -o yaml | grep -A5 taints:
Example output:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  unschedulable: false
status:
  addresses:
Each of these methods provides the same information in different formats. Choose the one that best suits your needs based on readability and how you plan to use the information.
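In scripts it is often handier to test for a specific taint than to eyeball the output. The sketch below runs jq against sample node JSON standing in for live cluster output; the taint key checked is just the control-plane taint from this tutorial:

```shell
# Check whether a taints array contains a given key. jq's -e flag makes the
# exit status reflect the result (0 when true), so this works in if-statements.
# The sample JSON stands in for `kubectl get node minikube -o jsonpath='{.spec.taints}'`.
taints='[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]'
if echo "$taints" | jq -e 'any(.[]; .key == "node-role.kubernetes.io/control-plane")' > /dev/null; then
  echo "node is tainted"
fi
```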
Adding and Removing Taints
Now that we know how to view taints, let's learn how to add and remove them. This is a common operation when you need to control pod scheduling or prepare nodes for maintenance.
Adding Taints to Nodes
The syntax for adding a taint to a node is:
kubectl taint nodes <node-name> <key>=<value>:<effect>
Let's add a taint to our Minikube node to mark it as having a GPU:
kubectl taint nodes minikube gpu=true:NoSchedule
You should see output like:
node/minikube tainted
Now, let's verify that the taint was added:
kubectl describe node minikube | grep -A3 Taints
Example output:
Taints:             gpu=true:NoSchedule
                    node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
As you can see, our node now has two taints: the original control-plane taint and our new GPU taint.
Removing Taints from Nodes
To remove a taint, you append a minus sign (-) to the same taint definition:
kubectl taint nodes <node-name> <key>=<value>:<effect>-
Let's remove the GPU taint we just added:
kubectl taint nodes minikube gpu=true:NoSchedule-
You should see output like:
node/minikube untainted
Let's verify that the taint was removed:
kubectl describe node minikube | grep -A3 Taints
Example output:
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: minikube
Now our node is back to having only the control-plane taint.
When to Use Taints
Taints are particularly useful in several scenarios:
- Specialized hardware: Tainting nodes with special hardware (like GPUs) to ensure only workloads requiring that hardware are scheduled there
- Node maintenance: Adding a taint before performing maintenance to prevent new pods from being scheduled
- Security isolation: Keeping certain workloads separate from others for security reasons
- Resource optimization: Dedicating nodes to specific workload types for optimal resource utilization
By understanding how to view, add, and remove taints, you have gained fundamental knowledge for managing pod scheduling in your Kubernetes cluster.
Working with Tolerations
Now that we understand how taints work, let's explore tolerations, the mechanism that allows pods to be scheduled on tainted nodes.
Understanding Tolerations
Tolerations are specified in pod specifications and allow pods to be scheduled on nodes with matching taints. A toleration consists of:
- key: Matches the taint key
- operator: Either Equal (matches key and value) or Exists (matches just the key)
- value: The value to match (when using the Equal operator)
- effect: The effect to match, or empty to match all effects
- tolerationSeconds: Optional duration for which the pod can remain on a node with a matching NoExecute taint
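This tutorial uses the Exists operator in its examples; for completeness, here is a hypothetical pod-spec fragment using Equal together with tolerationSeconds. The gpu key and the 60-second grace period are illustrative values, not part of this lab (note that tolerationSeconds is only valid with the NoExecute effect):

```yaml
# Hypothetical toleration: match a gpu=true:NoExecute taint exactly,
# but remain on the node for at most 60 seconds after the taint appears.
tolerations:
- key: "gpu"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"
  tolerationSeconds: 60
```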
Creating a Pod with Tolerations
Let's create a pod that tolerates our control-plane taint. First, let's create a YAML file for our pod:
nano ~/project/toleration-pod.yaml
Now, add the following content to the file:
apiVersion: v1
kind: Pod
metadata:
  name: toleration-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
This pod specification includes a toleration that matches the control-plane taint on our node. Save and exit the file (in nano, press Ctrl+O, Enter, then Ctrl+X).
Now, let's create the pod:
kubectl apply -f ~/project/toleration-pod.yaml
You should see output like:
pod/toleration-pod created
Let's check if the pod was scheduled on our node:
kubectl get pods -o wide
Example output:
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
toleration-pod   1/1     Running   0          12s   10.244.0.5   minikube   <none>           <none>
The pod is running on our minikube node because it has a toleration matching the control-plane taint.
Testing with a Pod without Tolerations
For comparison, let's create a pod without tolerations:
nano ~/project/no-toleration-pod.yaml
Add the following content:
apiVersion: v1
kind: Pod
metadata:
  name: no-toleration-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
Save and exit the file, then create the pod:
kubectl apply -f ~/project/no-toleration-pod.yaml
Now, let's check the pod status:
kubectl get pods -o wide
You might notice that the pod remains in a Pending state:
NAME                READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
no-toleration-pod   0/1     Pending   0          12s   <none>       <none>     <none>           <none>
toleration-pod      1/1     Running   0          2m    10.244.0.5   minikube   <none>           <none>
Let's check why the pod is pending:
kubectl describe pod no-toleration-pod
In the events section, you should see something like:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 45s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
This confirms that the pod could not be scheduled because it doesn't tolerate the control-plane taint.
Cleanup
Let's clean up the pods we created:
kubectl delete pod toleration-pod no-toleration-pod
You should see:
pod "toleration-pod" deleted
pod "no-toleration-pod" deleted
Congratulations! You now understand how taints and tolerations work together to control pod scheduling in Kubernetes.
Summary
In this hands-on lab, you learned how to work with Kubernetes taints and tolerations, key features for controlling pod scheduling in your cluster. Here's what you accomplished:
- Set up a Kubernetes environment using Minikube
- Understood the concept of taints and their effects on pod scheduling
- Explored different methods to view taints on Kubernetes nodes
- Added and removed taints from nodes using kubectl commands
- Created pods with and without tolerations to see how they interact with tainted nodes
These skills are essential for managing workload placement and resource allocation in Kubernetes clusters. By properly using taints and tolerations, you can ensure that pods are scheduled on appropriate nodes based on hardware requirements, workload characteristics, and resource constraints.
As you continue your Kubernetes journey, you can build upon this knowledge to implement more sophisticated scheduling strategies, such as node affinity and anti-affinity, to further optimize your cluster resources.