1. Introduction
Helm, the package manager for Kubernetes, has become an essential tool in the DevOps toolkit. It simplifies the process of deploying and managing applications on Kubernetes clusters.
However, like any powerful tool, Helm can sometimes throw errors that leave us scratching our heads. One such error is *Error: configmaps is forbidden: User “system:serviceaccount:kube-system:default” cannot list configmaps in the namespace “kube-system”* when we run helm list.
In this tutorial, we’ll investigate the root cause of this error and explore various solutions to fix it. Let’s get started!
2. Understanding the Error
Before we jump into the solutions, let’s break down the error message.
The error message configmaps is forbidden: User “system:serviceaccount:kube-system:default” cannot list configmaps in the namespace “kube-system” indicates a permission issue. Kubernetes employs Role-Based Access Control (RBAC) to regulate access to resources within the cluster, so a clear understanding of RBAC is crucial here.
RBAC in Kubernetes allows us to control who can access what resources within our cluster. It’s a powerful security feature but can also cause permission-related headaches if not configured correctly. RBAC defines Roles (scoped to a namespace) and ClusterRoles (cluster-wide) that specify a set of allowed actions on specific resources. These roles are then bound to users, groups, or service accounts through RoleBindings or ClusterRoleBindings, effectively granting them the defined permissions.
By default, the service account used by Helm does not have sufficient permissions to list ConfigMaps in the kube-system namespace. This matters because Helm 2 (through Tiller) stores release information as ConfigMaps in its namespace, kube-system by default, so helm list has to list ConfigMaps there. The restriction itself is a security measure to ensure that service accounts do not have more privileges than necessary. When RBAC is enabled (which is the default in most Kubernetes clusters), every request to the API server is authenticated and then authorized based on the permissions granted to the requester.
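To confirm that this is what’s happening in our cluster, we can ask the API server directly whether the default service account may list ConfigMaps. In an affected cluster, the answer we’d expect is no:
$ kubectl auth can-i list configmaps \
  --namespace kube-system \
  --as system:serviceaccount:kube-system:default
no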
3. Initial Troubleshooting Steps
Let’s ensure we’ve covered the basics and haven’t missed any prerequisites.
3.1. Verifying Helm and Kubernetes Versions
First, we should ensure we’re using compatible versions of Helm and Kubernetes. Running outdated versions can lead to compatibility issues.
Therefore, let’s verify our Helm and Kubernetes versions:
$ helm version
version.BuildInfo{
Version:"v3.13.3",
GitCommit:"3a31588ad33fe3b89af5a2a54ee1d25bfe6eaa5e",
GitTreeState:"clean",
GoVersion:"go1.20.5"
}
$ kubectl version
Client Version: version.Info{
Major:"1",
Minor:"28",
GitVersion:"v1.28.2",
GitCommit:"89a4ea3e1e4ddd7f7572286090859619aa2dc4ba",
GitTreeState:"clean",
BuildDate:"2023-09-13T09:35:49Z",
GoVersion:"go1.20.8",
Compiler:"gc",
Platform:"linux/amd64"
}
Server Version: version.Info{
Major:"1",
Minor:"28",
GitVersion:"v1.28.2",
GitCommit:"89a4ea3e1e4ddd7f7572286090859619aa2dc4ba",
GitTreeState:"clean",
BuildDate:"2023-09-13T09:29:07Z",
GoVersion:"go1.20.8",
Compiler:"gc",
Platform:"linux/amd64"
}
These commands show the Helm version, followed by the Kubernetes client and server versions (in this example). In our setup, we should verify that the Kubernetes version is compatible with our Helm version.
3.2. Ensuring Proper Namespace Configuration
We also need to ensure proper namespace configuration. If we’re not using the default namespace, we should confirm that Helm is pointed to the correct namespace.
To do this, we can list Helm releases in the kube-system namespace:
$ helm list --namespace kube-system
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
kube-system-release kube-system 3 2023-06-30 14:00:00.000000 +0000 UTC deployed kube-system-chart-3.0.0 3.0.0
Our output includes details of each release in the kube-system namespace, showing the release name, namespace, revision, update timestamp, status, chart name, and app version. This helps ensure that Helm is pointed to the correct namespace.
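If we’re not sure which namespace a release ended up in, Helm 3 can also list releases across all namespaces. The output below is illustrative and mirrors the release from the previous example:
$ helm list --all-namespaces
NAME                NAMESPACE    REVISION UPDATED                              STATUS   CHART                   APP VERSION
kube-system-release kube-system  3        2023-06-30 14:00:00.000000 +0000 UTC deployed kube-system-chart-3.0.0 3.0.0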
3.3. Checking RBAC Status
We should also verify if RBAC is enabled in our cluster:
$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1
If we see rbac.authorization.k8s.io/v1 in our output, then RBAC is enabled on our cluster.
3.4. Inspecting Current Service Account Permissions
To understand why helm list is failing, it’s helpful to inspect the current permissions of the default service account in the kube-system namespace:
$ kubectl auth can-i --list \
--namespace kube-system \
--as system:serviceaccount:kube-system:default
Resources Non-Resource URLs Resource Names Verbs
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
configmaps [] [] [get list watch]
...
This command checks the permissions of the default service account in the kube-system namespace. It lists the actions the service account can perform on various resources.
Let’s better understand our output:
- selfsubjectaccessreviews.authorization.k8s.io and selfsubjectrulesreviews.authorization.k8s.io – allow the service account to create requests to review its own access and rules (typically used for introspection and doesn’t affect the ability to list or manage ConfigMaps)
- configmaps – shows the actions the service account can perform on ConfigMaps (in this example output, the default service account has get, list, and watch permissions on ConfigMaps, which is enough for helm list; if this line were missing or lacked the list verb, that would be exactly what triggers the error, as we can investigate further below)
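To see which bindings currently grant permissions to the default service account, we can filter the existing RoleBindings and ClusterRoleBindings. This is a minimal sketch; in the -o wide output, the SERVICEACCOUNTS column lists subjects as namespace/name, and the grep pattern may need adjusting for our cluster:
$ kubectl get rolebindings,clusterrolebindings \
  --all-namespaces -o wide | grep "kube-system/default"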
These initial checks can help identify basic configuration issues. If everything is fine with our initial troubleshooting, we can now proceed with more specific solutions.
4. Granting Permissions to the Default Service Account
One straightforward solution to this error is to grant the default service account the permissions Helm needs. We can achieve this by creating RoleBindings or ClusterRoleBindings.
4.1. Granting Read-Only Permissions
We can use kubectl to provide read-only access to the default service account in the kube-system namespace:
$ kubectl create rolebinding default-view \
--clusterrole=view \
--serviceaccount=kube-system:default \
--namespace=kube-system
rolebinding.rbac.authorization.k8s.io/default-view created
Here, we create a rolebinding named default-view that binds the view cluster role to the default service account in the kube-system namespace. The view cluster role has read-only permissions.
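If we prefer to keep RBAC changes as declarative manifests under version control, the same binding can be written as YAML and applied with kubectl apply -f. This is a sketch of what the imperative command above creates:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-view
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
Because this is a RoleBinding rather than a ClusterRoleBinding, the view permissions apply only within the kube-system namespace.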
4.2. Granting Admin Access
For more extensive permissions, such as those needed to install packages, we can bind the cluster-admin role:
$ kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
This command creates a clusterrolebinding named add-on-cluster-admin that binds the cluster-admin cluster role to the default service account in the kube-system namespace. The cluster-admin role provides full administrative access, which should resolve the error when listing ConfigMaps.
Notably, while this method is quick and easy, granting cluster-admin permissions can pose security risks. We should always follow the principle of least privilege, granting only the permissions necessary for the task at hand.
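If we do take this shortcut temporarily, we should remember to revoke it once it’s no longer needed. Deleting the binding removes the elevated access without affecting the service account itself:
$ kubectl delete clusterrolebinding add-on-cluster-admin
clusterrolebinding.rbac.authorization.k8s.io "add-on-cluster-admin" deleted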
5. Creating a Dedicated Service Account for Tiller in Helm 2
If we’re using Helm 2, a more secure and manageable approach than granting broad permissions to the default service account is to create a dedicated service account for Tiller. Tiller is the server-side component of Helm 2. It runs inside the Kubernetes cluster and interacts with the Helm client.
Tiller creates, updates, and deletes Kubernetes resources on our behalf, so it often needs broad permissions. However, assigning such permissions to the default service account is risky due to potential over-provisioning, which could lead to security vulnerabilities. Thus, creating a dedicated service account for Tiller allows for more granular control over permissions and better adherence to security best practices.
5.1. Creating the Service Account
First, we need to create a dedicated service account (tiller) in the kube-system namespace:
$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
Here, we create a service account tiller in our kube-system namespace.
5.2. Creating a ClusterRoleBinding
Next, we’ll create a ClusterRoleBinding that assigns the cluster-admin role to the newly created tiller service account:
$ kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
We now create a clusterrolebinding named tiller-cluster-rule that binds the cluster-admin role to the tiller service account in our kube-system namespace.
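As before, we can keep this binding as a declarative manifest instead of creating it imperatively. A sketch of the equivalent YAML:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-cluster-rule
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io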
5.3. Updating Helm to Use the New Service Account
Finally, we initialize or upgrade Helm to use the newly created tiller service account:
$ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /home/user/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
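To confirm that Tiller is now running under the new service account, we can inspect its deployment (assuming the default deployment name tiller-deploy that helm init creates):
$ kubectl get deployment tiller-deploy \
  --namespace kube-system \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'
tiller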
With these steps, Tiller runs with the appropriate permissions under a dedicated service account, resolving the ConfigMaps permission error without over-provisioning access to the default service account, in line with security best practices.
6. Restricting Helm’s Access to a Specific Namespace in Helm 2
For an even more secure setup, we can restrict Helm’s access to a specific namespace rather than granting it cluster-wide permissions. This method is especially useful in multi-tenant environments where we want to limit the scope of Tiller’s actions.
6.1. Creating a New Namespace
We start by creating a new namespace for Tiller:
$ kubectl create namespace tiller-world
namespace/tiller-world created
Here, we create a new namespace tiller-world.
6.2. Creating the Service Account
Next, we create a new service account within the tiller-world namespace:
$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount/tiller created
This command creates a new service account (tiller) in the tiller-world namespace.
6.3. Defining a Role for Tiller
Roles in Kubernetes grant fine-grained permissions within a specific namespace. This is crucial for maintaining security and adhering to the principle of least privilege, as it allows us to limit the scope of Tiller’s access to just the tiller-world namespace.
Now, we create a Role definition file (role-tiller.yaml) that specifies the permissions for the Tiller service account:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
In this YAML file, we define a role tiller-manager with permissions to perform any action on all resources within the tiller-world namespace.
Let’s better understand the rules (permissions) of this Role:
- apiGroups – specifies the API groups that the Role applies to (“” refers to the core API group, and batch, extensions, and apps are additional API groups relevant to Kubernetes resources)
- resources – specifies the types of resources the Role can interact with ([“*”] indicates all resource types within the specified API groups)
- verbs – specifies the actions that can be performed on the resources ([“*”] indicates all possible actions, such as get, list, create, delete)
This level of access ensures that Tiller has the necessary permissions to manage Helm releases effectively in this namespace.
Afterward, we should apply the Role definition file:
$ kubectl create -f role-tiller.yaml
role.rbac.authorization.k8s.io/tiller-manager created
We’ve successfully applied the new role.
6.4. Creating a RoleBinding
We also need to create a RoleBinding definition file (rolebinding-tiller.yaml) to bind the tiller-manager role to the tiller service account:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
Our YAML file defines a RoleBinding named tiller-binding that binds the tiller-manager role to the tiller service account in the tiller-world namespace.
Then, we apply the RoleBinding:
$ kubectl create -f rolebinding-tiller.yaml
rolebinding.rbac.authorization.k8s.io/tiller-binding created
The RoleBinding is now in place.
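Before initializing Helm, we can verify that the binding behaves as intended with kubectl’s authorization checks. With the Role above, we’d expect access inside tiller-world but not outside it:
$ kubectl auth can-i create deployments \
  --namespace tiller-world \
  --as system:serviceaccount:tiller-world:tiller
yes
$ kubectl auth can-i create deployments \
  --namespace default \
  --as system:serviceaccount:tiller-world:tiller
no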
6.5. Initializing Helm with the New Namespace
Next, we initialize Helm to use the tiller service account and specify the new namespace:
$ helm init \
--service-account tiller \
--tiller-namespace tiller-world
$HELM_HOME has been configured at /home/user/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
Our command initializes Helm to use the tiller service account in the tiller-world namespace.
6.6. Setting the TILLER_NAMESPACE Environment Variable
To ensure that all our Helm operations target the correct namespace, we should set the TILLER_NAMESPACE environment variable:
$ export TILLER_NAMESPACE=tiller-world
Setting the TILLER_NAMESPACE environment variable ensures that all Helm operations are directed to the correct namespace (tiller-world). This step is crucial for consistent operation and avoiding namespace-related issues.
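With the variable set (or the --tiller-namespace flag passed explicitly), Helm 2 commands are routed to the Tiller instance in tiller-world. As an illustrative sketch, installing a chart into that namespace might look like this (the chart and release name are placeholders):
$ helm install stable/mysql \
  --name my-release \
  --namespace tiller-world \
  --tiller-namespace tiller-world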
By restricting Helm’s access to a specific namespace, we enhance the security of our Kubernetes cluster by limiting the scope of permissions granted to Tiller.
7. Using Helm 3 to Avoid Tiller
With the release of Helm 3, the need for Tiller has been eliminated, simplifying Helm’s security model and removing a potential attack vector. Therefore, unless we have a hard requirement to stay on Helm 2, upgrading to Helm 3 is a highly recommended and future-proof solution.
Unlike Helm 2 (which uses a client-server model with Tiller installed in the cluster), Helm 3 uses a client-only architecture. This eliminates the need for Tiller and simplifies the security model, as there’s no need to grant cluster-wide permissions to a Tiller service account.
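In Helm 3, release information is stored as Secrets in the namespace of the release itself, so helm list no longer needs any access to ConfigMaps in kube-system. If we want to see this storage directly, the release Secrets carry an owner=helm label (the namespace and release name below are illustrative):
$ kubectl get secrets --namespace my-namespace -l owner=helm
NAME                               TYPE                 DATA   AGE
sh.helm.release.v1.my-release.v1   helm.sh/release.v1   1      5m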
Thus, we can install Helm 3 afresh and then use it to manage our releases directly.
On the other hand, we can use the helm-2to3 plugin to migrate our existing Helm 2 releases to Helm 3.
To do this, first, we install the plugin:
$ helm plugin install https://github.com/helm/helm-2to3
Installed plugin: 2to3
Then, we use the plugin to convert our Helm 2 release to Helm 3:
$ helm 2to3 convert <release_name>
2024/03/30 12:00:00 Release "my-release" will be converted from Helm v2 to Helm v3.
2024/03/30 12:00:00 [Helm 3] Release "my-release" created.
2024/03/30 12:00:00 [Helm 3] ReleaseVersion "my-release.v1" created.
2024/03/30 12:00:00 Release "my-release" was successfully converted from Helm v2 to Helm v3.
We should replace <release_name> with the name of our release.
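Once every release has been converted and verified, the same plugin can clean up the leftover Helm 2 state (configuration, release data, and the Tiller deployment):
$ helm 2to3 cleanup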
Moving to Helm 3 eliminates the need for Tiller and simplifies our Kubernetes management processes while enhancing security.
8. Conclusion
In this article, we’ve seen various solutions to address the helm list error related to listing ConfigMaps in the kube-system namespace. We started by understanding the root cause of the error, which was related to RBAC permissions. Then, we discussed multiple approaches to resolve it, including granting permissions to the default service account, creating a dedicated service account for Tiller, restricting Helm’s access to a specific namespace, and upgrading to Helm 3 to avoid Tiller altogether.