1. Introduction
When managing Kubernetes clusters, it’s crucial to understand pod scheduling and how pods run on various nodes. Taints are a fundamental part of this process, providing a mechanism for nodes to repel certain pods unless those pods explicitly tolerate the taint.
In this tutorial, we’ll explore the process of setting, listing, and removing taints across Kubernetes nodes to enhance our operational capabilities in a Kubernetes environment.
Notably, we’ll examine several methods using kubectl, from basic to advanced techniques, to retrieve detailed information about node taints. Let’s get started!
2. Understanding Kubernetes Taints
In Kubernetes, taints are properties of nodes that repel pods that don’t tolerate those taints. They consist of a key, value, and effect.
The effect tells the scheduler what to do with pods that don’t tolerate the taint: typically, either avoid scheduling them on the node (NoSchedule), schedule them only if there are no other options (PreferNoSchedule), or evict them if they are already running (NoExecute).
Practically, we use these three configurations to ensure that nodes don’t accept certain pods unless those pods explicitly tolerate the taints applied to those nodes. This can be useful in several scenarios, such as dedicating certain nodes to specific functions like GPU-based processing tasks or isolating nodes for security-sensitive applications.
By using taints, we can better control the workload distribution across the cluster, ensuring optimal resource utilization and compliance with operational policies. They also provide us with the visibility and control needed for efficient cluster management.
2.1. Setting a Taint
Before we can list or manage taints, we need to understand how to set them on Kubernetes nodes. We apply taints to nodes to control which pods can be scheduled onto them, based on the presence of corresponding tolerations in those pods.
To set a taint, we use the kubectl taint command:
$ kubectl taint nodes [NODE_NAME] [KEY]=[VALUE]:[EFFECT]
Let’s better understand the syntax:
- [NODE_NAME] – name of the node to apply the taint
- [KEY] – a free-form string that acts as the identifier for the taint
- [VALUE] – associated with the key, which we can use to specify the taint further
The [EFFECT] determines how the taint affects pods and can be one of three options:
- NoSchedule – Pods that don’t tolerate this taint aren’t scheduled on the node.
- PreferNoSchedule – The system will try to avoid placing a pod that doesn’t tolerate this taint on the node, but this isn’t guaranteed.
- NoExecute – Pods that don’t tolerate this taint are evicted if they’re already running on the node and aren’t scheduled to run on it in the future.
Putting all these together, let’s see a quick example:
$ kubectl taint nodes node1 app=blue:NoSchedule
node/node1 tainted
In this example, our output indicates the application of the taint app=blue:NoSchedule to node1. This means that any pod that doesn’t have a toleration matching app=blue won’t be scheduled on node1, ensuring that only pods prepared for specific configurations or workloads are eligible for scheduling on this node.
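To illustrate the other side of this relationship, here’s a minimal sketch of a pod that tolerates the taint; the pod name and image are placeholders of our own choosing:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: blue-app
spec:
  containers:
    - name: app
      image: nginx
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "blue"
      effect: "NoSchedule"
EOF
pod/blue-app created
Because the toleration matches the taint’s key, value, and effect, the scheduler is again free to place this pod on node1.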
To better understand taints in Kubernetes, let’s see where we can practically put them to use.
2.2. Dedicated Nodes
In Kubernetes, dedicating nodes to specific workloads through taints ensures that only pods with matching tolerations are eligible for scheduling on these nodes. This strategy is particularly useful when certain workloads require specialized hardware, such as GPUs for computational tasks or SSDs for high-performance databases.
For instance, we can apply a taint to nodes with GPUs to ensure that only machine learning or video processing applications run on them, thereby maximizing the utilization of expensive resources.
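For example, assuming a hypothetical GPU node named gpu-node1 and a taint key of our own choosing, we could reserve it like this:
$ kubectl taint nodes gpu-node1 hardware=gpu:NoSchedule
node/gpu-node1 tainted
From then on, only pods that tolerate hardware=gpu are scheduled onto that node.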
Additionally, taints can help enforce licensing or compliance requirements by ensuring that certain applications only run on certified or compliant infrastructure. This can be critical in regulated industries where data handling and processing must meet strict standards.
2.3. Maintenance and Updates
Using taints during maintenance or upgrades is an effective strategy for managing node availability without disrupting the overall cluster performance.
By applying a taint with the NoSchedule effect, we mark the node as unavailable for new pods while allowing existing pods to continue running. This is crucial during software updates or hardware upgrades, as it ensures that the node’s workload isn’t suddenly increased by new pods, which could compromise the maintenance operations.
Moreover, for more severe maintenance that requires node downtime (such as hardware replacements), we can use the NoExecute taint, which evicts existing pods and prevents the scheduling of new ones. This allows for a clean environment where we can perform maintenance without interference from running workloads.
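As a sketch, assuming a node named node2 and a hypothetical maintenance key, we might first stop new scheduling and then, for heavier work, evict the remaining pods:
$ kubectl taint nodes node2 maintenance=true:NoSchedule
node/node2 tainted
$ kubectl taint nodes node2 maintenance=true:NoExecute
node/node2 tainted
Once the maintenance is complete, we remove both taints again, as we’ll see in the removal section below.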
2.4. Emergency Isolation
In the event of a security breach or critical failure, quickly isolating a node is vital to minimizing the impact on the rest of the Kubernetes cluster.
By applying an emergency taint with the NoExecute effect, all non-tolerating pods are immediately evicted, and the node stops accepting new pods. This rapid isolation helps contain the breach or failure while we investigate and resolve the issue.
Furthermore, this approach can be part of a broader disaster recovery strategy, ensuring that affected nodes are quarantined and replaced without compromising the integrity of the entire cluster. Emergency taints act as a first line of defense, giving us time to perform thorough analyses and remediation.
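For instance, assuming a compromised node named node4 and a hypothetical quarantine key, we can isolate it immediately and then check what’s still running there:
$ kubectl taint nodes node4 quarantine=true:NoExecute
node/node4 tainted
$ kubectl get pods --all-namespaces --field-selector spec.nodeName=node4
An empty pod list then confirms that all non-tolerating pods have been evicted.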
In short, understanding and implementing these taint mechanisms can enhance cluster resilience, optimize resource utilization, and maintain strict control over pod placement and node usage.
3. Using kubectl to List Node Taints
kubectl remains our go-to command-line tool for interacting with Kubernetes and managing its resources, including nodes. It provides a powerful set of commands for retrieving detailed information about cluster resources, including node taints.
Let’s see several ways to use kubectl to extract detailed taint information from our Kubernetes nodes.
4. The kubectl describe nodes Command
One of the simplest ways to view the taints applied to nodes is through the kubectl describe nodes command. It provides a verbose output that includes a lot of information about each node, such as its status, capacities, and assigned taints:
$ kubectl describe nodes node-name
...
Taints: node-role.kubernetes.io/master:NoSchedule
...
This method is especially useful for quick checks or for getting a comprehensive view of a specific node’s configuration and status.
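When we only care about the taints themselves, we can filter the verbose output across all nodes, for example with grep; the sample output below assumes the taints we set earlier:
$ kubectl describe nodes | grep -E "^(Name|Taints):"
Name:               node1
Taints:             app=blue:NoSchedule
Name:               node2
Taints:             <none>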
5. Advanced kubectl Queries
For environments where automation and scripting are essential, or when we need to process node and taint data programmatically, kubectl offers more tailored output options.
5.1. JSON and JSONPath Output
For more detailed and customizable output, we can use the JSON output format combined with JSONPath queries. This method is highly flexible and allows us to extract almost any data we need from the Kubernetes API responses.
Let’s see an example:
$ kubectl get nodes -o jsonpath="{range .items[*]}{.metadata.name}:{' '}{range .spec.taints[*]}{.key}={.value}:{.effect},{' '}{end}{'\n'}{end}"
node1: app=blue:NoSchedule,
node2: gpu=exclusive:NoSchedule,
node3:
As we can see in our output, node1 has a taint with the key app, the value blue, and the effect NoSchedule. Also, node2 has a taint with the key gpu, the value exclusive, and the effect NoSchedule. node3 doesn’t have any taints.
Notably, this JSONPath command is particularly useful for scripting and automation purposes in complex Kubernetes environments. It allows automation tools to parse node and taint data programmatically, facilitating dynamic adjustments in scripts or integration with other management tools.
For instance, an automated script could use this output to generate alerts or reports about nodes exclusively reserved for specific tasks or to ensure compliance with organizational policies on workload isolation.
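As a small sketch of that idea (the no-taint policy and warning format are assumptions of our own), a Bash loop can reuse a similar JSONPath query to flag nodes that carry no taints at all:
#!/bin/bash

# List each node name followed by its taint keys
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}:{" "}{range .spec.taints[*]}{.key}{" "}{end}{"\n"}{end}' |
while read -r line; do
    # A line reduced to "name:" means the node carries no taints
    if [[ "$line" =~ ^[^:]+:$ ]]; then
        echo "WARNING: node ${line%%:*} has no taints"
    fi
done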
5.2. Using JSON and jq
If we want to show all the components of each taint (key, value, and effect) in a single, readable line per node, we can use jq, a command-line JSON processor, with the -r option.
Specifically, this method invokes jq to process the JSON output, while the -r option outputs raw strings, omitting the double quotes that usually surround jq output strings:
$ kubectl get nodes -o=json | jq -r '.items[] | .metadata.name + "\t" + (if .spec.taints then (.spec.taints | map(.key + "=" + (.value // "") + ":" + .effect) | join(", ")) else "No taints" end)'
node1 app=blue:NoSchedule, gpu=exclusive:PreferNoSchedule
node2 No taints
node3 storage=high:NoExecute
Here, we use jq to extract and format the node names and taints. It efficiently handles cases where nodes have no taints by outputting “No taints”.
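Along the same lines, jq’s select filter lets us ask more targeted questions. For example, this sketch lists only the nodes that carry at least one NoExecute taint:
$ kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints // [] | any(.effect == "NoExecute")) | .metadata.name'
node3
Consistent with the output above, only node3 carries a NoExecute taint.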
5.3. Listing All Taints Across All Nodes
We can also get a quick overview of all taints across all nodes in our cluster:
$ kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
node1 app
node2 gpu
node3
Here, we list all nodes in the cluster and the keys of any taints applied to them. The keys of the taints follow each node’s name.
5.4. Filtering Nodes by Specific Taints
When we need to find nodes marked with specific taints, especially in larger clusters, we can filter the list:
$ kubectl get nodes -o go-template='{{range $item := .items}}{{with $nodename := $item.metadata.name}}{{range $taint := $item.spec.taints}}{{if and (eq $taint.key "node-role.kubernetes.io/master") (eq $taint.effect "NoSchedule")}}{{printf "%s\n" $nodename}}{{end}}{{end}}{{end}}{{end}}'
master-node
In this example, we filter and list nodes with a specific taint applied — in this case, node-role.kubernetes.io/master with an effect of NoSchedule. Our output master-node indicates that this node is tainted as a master node where regular pods shouldn’t be scheduled. This is common in Kubernetes setups to isolate master nodes from regular workload nodes.
Also, filtering is crucial for operational tasks where we need to quickly identify or isolate certain node roles for updates and maintenance.
5.5. Custom Queries for Complex Environments
Furthermore, in environments with large Kubernetes clusters and hundreds of nodes with various taints, creating custom queries can significantly streamline management tasks.
For instance, we can generate a detailed report that includes various node information:
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture,KERNEL:.status.nodeInfo.kernelVersion,TAINTS:.spec.taints
NAME ARCH KERNEL TAINTS
node1 amd64 4.19.0-6-amd64 [{"key":"app","value":"blue","effect":"NoSchedule"}]
node2 amd64 4.19.0-6-amd64 [{"key":"gpu","value":"exclusive","effect":"NoSchedule"}]
node3 amd64 4.19.0-6-amd64 []
Here, our detailed report includes each node’s name, architecture, kernel version, and any taints applied. It’s especially valuable for audits and compliance tracking.
5.6. Using Go Templates for Custom Outputs
Lastly, if we need complete control over the output format, kubectl supports Go templates. This feature is powerful for crafting exactly the output we need.
Let’s see how we can list nodes and their taints using a Go template:
$ kubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}{{"\t"}}{{range .spec.taints}}{{.key}}={{.value}}:{{.effect}}{{"\t"}}{{end}}{{"\n"}}{{end}}'
node1	app=blue:NoSchedule
node2	gpu=exclusive:NoSchedule
node3
In this example, we use the Go template to provide a neatly formatted output, showing each node’s taints in a clear, easy-to-read format.
6. Balancing Taints Across a Multi-Tenant Cluster
Maintaining resource balance while ensuring isolation between different tenants is a crucial and often complex task in multi-tenant Kubernetes environments.
Fortunately, in such an environment, we can utilize taints to manage this isolation by preventing certain tenants’ workloads from running on specific nodes. This helps assess and manage the distribution and utilization of resources across a multi-tenant cluster.
For example, we can list nodes along with specific taints and resource allocations:
$ kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints,ALLOCATABLE_CPU:.status.allocatable.cpu,ALLOCATABLE_MEM:.status.allocatable.memory'
NAME TAINTS ALLOCATABLE_CPU ALLOCATABLE_MEM
node1 [{"key":"tenantA","effect":"NoSchedule"}] 8 32Gi
node2 [{"key":"tenantB","effect":"NoSchedule"}] 16 64Gi
node3 [] 4 16Gi
Here, we customize our output columns to show the name of each node, any taints applied, and the allocatable CPU and memory resources.
TAINTS shows an array of taints applied to each node. For example, node1 has a taint with a key of tenantA and an effect of NoSchedule, meaning that only pods that tolerate this taint can be scheduled on node1. We can use this to ensure that only workloads belonging to tenantA deploy on this node, providing isolation from other tenants.
Also, ALLOCATABLE_CPU and ALLOCATABLE_MEM display the amount of CPU and memory resources allocatable on each node, which helps us understand the distribution of resources across the cluster.
Importantly, in a complex multi-tenant environment, balancing resources goes beyond simply allocating CPU and memory. Network bandwidth, storage IOPS, and other compute resources like GPUs also need consideration. To manage these aspects effectively, we can employ resource quotas and limits with taints and tolerations.
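As a sketch of that combination, a hypothetical ResourceQuota in a tenant-a namespace (assuming that namespace already exists) could cap what the tenant requests, complementing the node-level isolation the taints provide:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 32Gi
EOF
resourcequota/tenant-a-quota created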
7. Automating Taint Adjustments
Automating the management of taints in Kubernetes can be crucial for dynamically responding to changes in workload or node conditions.
Let’s see a quick Bash script that automates this process, applying or removing a taint based on CPU usage:
#!/bin/bash

# Node name to check
NODE_NAME="node1"

# Fetch the node's current CPU usage percentage (third column of kubectl top node) and strip the % sign
CPU_USAGE=$(kubectl top node "$NODE_NAME" --no-headers | awk '{print $3}' | sed 's/%//')

# Apply or remove the taint depending on the CPU usage
if [ "$CPU_USAGE" -gt 90 ]; then
    kubectl taint nodes "$NODE_NAME" high_cpu_usage:NoSchedule --overwrite
else
    kubectl taint nodes "$NODE_NAME" high_cpu_usage:NoSchedule-
fi
Since CPU usage is a common indicator of node stress that might necessitate rebalancing the workload, the script uses kubectl top node $NODE_NAME --no-headers to fetch the current usage of the specified node ($NODE_NAME). This command outputs several columns of data, and we extract the CPU percentage (the third column) with awk '{print $3}' before stripping the % sign with sed. The result represents the percentage of CPU currently in use by the node.
Conditionally, if the CPU usage exceeds 90%, the script applies the high_cpu_usage:NoSchedule taint to the node. The --overwrite flag ensures it replaces any existing taint with the same key instead of failing.
Otherwise, if the CPU usage is 90% or less, the script removes the taint by appending a minus sign (high_cpu_usage:NoSchedule-). This allows the scheduling of pods on the node again (if the taint isn’t present, kubectl reports an error we can safely ignore).
Let’s save the script as taint-nodes.sh, make it executable with chmod +x taint-nodes.sh, and then run it:
$ ./taint-nodes.sh
node/node1 tainted
We can see that the taint high_cpu_usage:NoSchedule was successfully applied to node1 due to high CPU usage. This will prevent new pods that don’t tolerate this taint from being scheduled on the node.
On the other hand, if the CPU usage is normal, we’ll get a different result:
$ ./taint-nodes.sh
node/node1 untainted
The output shows that the taint high_cpu_usage:NoSchedule was successfully removed from node1.
Notably, we can automate the execution of the script using a cron job and redirect the output of the script to a file for logging purposes:
# crontab -e
*/5 * * * * /home/user/taint-nodes.sh >> /var/log/taint-nodes.log 2>&1
This setup appends the standard output and standard error from our script to /var/log/taint-nodes.log, allowing us to review what happens each time the cron job runs.
8. Removing Taints
Removing taints from a node in Kubernetes is essentially the reverse application process.
If we decide that a node should no longer repel certain pods, we can remove the taint using the kubectl taint command with a minus sign (-) appended after the taint specification:
$ kubectl taint nodes [NODE_NAME] [KEY]=[VALUE]:[EFFECT]-
For instance, if we previously set a taint of key1=value1:NoSchedule, we can remove it by appending the minus sign:
$ kubectl taint nodes node1 key1=value1:NoSchedule-
Here, we remove the taint key1=value1:NoSchedule from node1.
However, to remove all taints with a given key, regardless of their value and effect, we append the minus sign directly to the key:
$ kubectl taint nodes node1 key1-
This effectively untaints the node regarding key1, allowing pods that don’t tolerate this taint to be scheduled on the node again. It’s a straightforward but powerful way to dynamically adjust our cluster’s scheduling behavior.
9. Conclusion
In this article, we explored various methods for setting, listing, managing, and removing taints on Kubernetes nodes using the kubectl command-line tool.
From basic commands suitable for quick lookups to advanced queries and custom outputs for detailed reporting, kubectl offers the flexibility to handle almost any node management scenario in a Kubernetes environment.