1. Introduction

Managing Kubernetes clusters and contexts can become complicated, especially when configurations grow outdated or redundant. This makes it important to clean up the kubectl configuration from time to time.

In this tutorial, we’ll study the commands for deleting clusters and contexts from kubectl configurations. Firstly, we’ll review the main concepts involved in a Kubernetes environment. Then, we’ll see which commands are responsible for the deletions. Finally, we’ll apply them in a Kubernetes environment.

2. Understanding Clusters and Contexts in kubectl

kubectl is the command-line tool for interacting with Kubernetes clusters. It supports various operations, from creating pods to checking cluster status. It uses a configuration file, typically located at ~/.kube/config, to store information about clusters, contexts, and more. For kubectl to operate correctly, this file must point to a properly configured cluster.
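For instance, to see which context is active and to inspect a configuration file other than the default one, we can run the commands below. The path is only a placeholder, and kubectl also honors the KUBECONFIG environment variable:

$ kubectl config current-context
$ kubectl config view --kubeconfig=/path/to/other/config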

In Kubernetes environments, the cluster is the system’s foundation, providing the essential infrastructure for container orchestration. As such, it’s the primary organizational unit of Kubernetes.

Basically, a cluster is a set of nodes that run containerized applications. One key component is the control plane, which orchestrates workload distribution and keeps the applications running. Each cluster described in the kubectl configuration file is defined by a cluster entry, which includes the API server address and the data needed to connect to it, such as the certificate authority.
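As a minimal sketch, a cluster entry in ~/.kube/config looks similar to the snippet below, where the server address and the name my-cluster are hypothetical values:

clusters:
- cluster:
    certificate-authority-data: <base64-encoded-CA-certificate>
    server: https://my-cluster.example.com:6443
  name: my-cluster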

The figure below, from the Kubernetes documentation, illustrates the complete architecture of a cluster:
kubernetes cluster architecture

In these scenarios, we often need to switch between different working environments, such as development, staging, and production. To simplify this process, we can create a context.

A Kubernetes context is a configuration that groups parameters to facilitate interaction with different clusters. It encapsulates three key pieces of information: the cluster to access, the user performing operations, and the namespace for those actions. Defining and using contexts allows us to quickly switch working environments without repeatedly specifying connection and authentication details.
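For example, assuming a cluster entry named my-cluster and a user named dev-user already exist in the configuration, we could define a context for the development namespace and switch to it:

$ kubectl config set-context dev --cluster=my-cluster --user=dev-user --namespace=development
$ kubectl config use-context dev

After that, every kubectl command runs against my-cluster as dev-user in the development namespace until we switch contexts again.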

3. How to Delete Clusters and Contexts

Two approaches can be used to remove settings from the kubectl configuration. In cases where we want to remove or adjust specific attributes within configuration file entries, we can use unset. It operates on a dotted property path, which lets us clear individual values as well as whole entries.

So, to remove a specific context, we use:

$ kubectl config unset contexts.<name-context>

Now, to remove a single cluster:

$ kubectl config unset clusters.<name-cluster>

Note that the general form is kubectl config unset followed by the section and the entry's key, joined by a dot. This method permits adjusting or clearing specific parts of the configuration without affecting related settings.
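For example, assuming a context named staging, we could clear only its namespace field while keeping the rest of the entry intact:

$ kubectl config unset contexts.staging.namespace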

The second approach is the delete-context and delete-cluster commands. We use them to remove an entire named entry from the kubectl configuration file. Running one of them removes all the information related to the given entry, i.e., it completely deletes a context or cluster.

Then, to delete a context, run:

$ kubectl config delete-context <name-context>

As for removing a cluster:

$ kubectl config delete-cluster <name-cluster>

Removing a context or cluster will cause any subsequent kubectl commands that depend on those settings to fail until a new valid context is set up.
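So, after a cleanup, it's a good idea to list the remaining contexts and point kubectl at a valid one; the context name below is just a placeholder:

$ kubectl config get-contexts
$ kubectl config use-context <name-context>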

4. Example

Now that we’ve studied the commands, let’s apply them in a Kubernetes environment to see how they work. Firstly, let’s check which clusters and contexts are registered using kubectl:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://master-node:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    namespace: staging
    user: user-02
  name: staging
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

In this case, there’s only one cluster, named kubernetes, whose API server runs on the master-node host. Additionally, we have two registered contexts, both operating on the same cluster but with different users and namespaces.

In our current simple scenario, the configuration file contains minimal information. In more complex scenarios, over time, this file can become full of entries for clusters and contexts that are no longer in use. Therefore, it’s essential to clean up these configurations to maintain a proper workflow.

Firstly, let’s apply the unset command to remove the staging context:

$ kubectl config unset contexts.staging
Property "contexts.staging" unset.

Note that, as previously explained, the returned message indicates that only the specified property has been unset. The same happens when we remove a cluster:

$ kubectl config unset clusters.kubernetes
Property "clusters.kubernetes" unset.

Now, assuming the entries are back in the configuration, let’s actually delete the context and the cluster with the delete-context and delete-cluster commands and analyze the difference:

$ kubectl config delete-context staging
deleted context staging from /home/user/.kube/config

$ kubectl config delete-cluster kubernetes
deleted cluster kubernetes from /home/user/.kube/config

Here, we’ve removed the complete entries from the configuration file, including all of their associated settings. Note that this only affects the local kubeconfig; the cluster itself and its workloads remain untouched.
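To confirm the cleanup, we can list the clusters and contexts that are still registered:

$ kubectl config get-clusters
$ kubectl config get-contexts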

5. Conclusion

In this article, we studied two ways of removing clusters and contexts from the kubectl configuration: unset and the delete-context/delete-cluster commands. The choice between them depends on the final objective. The delete-context and delete-cluster commands perform a complete removal of an entry, while unset allows for a more granular removal of specific attributes.