1. Introduction
Helm has established itself as Kubernetes’ de facto package manager, simplifying the deployment and management of applications on Kubernetes clusters. By bundling Kubernetes resources into charts, Helm enables us to deploy complex applications quickly and consistently.
In this tutorial, we’ll discuss best practices for using Helm charts across our deployments.
2. Effective Use of Labels and Selectors
Labels and selectors are fundamental to Kubernetes’ architecture, enabling the grouping, selection, and management of resources within a cluster. When used effectively within Helm charts, labels facilitate resource management and offer insights into the deployment’s structure and state.
Furthermore, labels allow us to associate metadata with Kubernetes resources. We can utilize them to identify attributes of resources that are significant to users or tools.
2.1. Labels in Helm Templates
Within a chart, labels are critical in organizing resources, helping with tasks ranging from querying resource status to implementing policies based on these labels.
When defining resources within Helm templates, it’s crucial to include labels that help identify the resource’s purpose, origin, or other characteristics. Also, they must adhere to common standards that promote interoperability and ease of management.
Let’s see an example of how to define labels within a Helm template:
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-pod"
  labels:
    app.kubernetes.io/name: "{{ .Chart.Name }}"
    app.kubernetes.io/instance: "{{ .Release.Name }}"
    app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
    app.kubernetes.io/managed-by: "{{ .Release.Service }}"
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
As we can see, incorporating a consistent labeling schema across our Helm charts not only aids in resource management but also aligns with best practices recommended by the Kubernetes community. Collectively, these labels make our charts more intuitive and easier to work with. Note that app.kubernetes.io/version uses .Chart.AppVersion (the application's version), which is distinct from .Chart.Version (the chart's own version) used in the helm.sh/chart label.
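For instance, once these labels are in place, a single selector query lists every pod belonging to a given release (here assuming a release named myrelease):
$ kubectl get pods -l app.kubernetes.io/instance=myrelease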
2.2. Using Selectors
Selectors allow us to filter and manipulate resources based on their labels.
In Helm charts, we can use selectors in various Kubernetes objects to select a subset of resources.
For example, a Service in Kubernetes uses selectors to determine which pods to route traffic to:
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Release.Name }}-service"
spec:
  selector:
    app.kubernetes.io/name: "{{ .Chart.Name }}"
    app.kubernetes.io/instance: "{{ .Release.Name }}"
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Here, the selector field specifies, based on their labels, which pods the Service should route traffic to. This Service routes traffic to any pod whose app.kubernetes.io/name and app.kubernetes.io/instance labels match the values defined in the Helm template.
2.3. Centralizing Label Management
Another best practice is defining common labels in the chart's _helpers.tpl file. By convention, template files whose names begin with an underscore aren't rendered as Kubernetes manifests, which makes _helpers.tpl the natural home for reusable template snippets.
With this single file, we create a single source of truth for labeling across our Helm chart. This centralization makes updating and maintaining labels easier as our application evolves and scales, ensuring consistency across all resources.
Let’s see how we might structure our _helpers.tpl file to define standard labels:
{{/*
Standard application labels
*/}}
{{- define "mychart.labels.standard" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
Here, the mychart.labels.standard template defines essential labels commonly applied to Kubernetes resources. We can replace the mychart prefix with our own chart's name.
2.4. Using Common Labels in Resource Templates
When defining Kubernetes resources in our Helm chart, we can now include these common labels using the include function. Then, we pipe its output to the indent function to ensure proper formatting.
Let’s see a typical example:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-service
  labels:
{{ include "mychart.labels.standard" . | indent 4 }}
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
{{ include "mychart.labels.standard" . | indent 4 }}
Here, the include function pulls the common labels in from _helpers.tpl and applies them to both the metadata and the selector of the Service definition, while the indent function aligns them with the surrounding YAML structure.
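As an aside, many charts instead pipe the output to Sprig's nindent function, which prepends a newline before indenting. Combined with the whitespace-chomping {{- modifier, this lets the include call sit inline with its key:
metadata:
  name: {{ .Release.Name }}-service
  labels:
    {{- include "mychart.labels.standard" . | nindent 4 }}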
3. Securing Helm Secrets
Secrets management is critical to deploying applications, primarily when we use Helm to manage deployments in Kubernetes. We need to carefully handle secrets, such as passwords, OAuth tokens, and SSH keys, to avoid accidental exposure.
Storing secrets in plain text, even within Helm charts, is risky and a bad practice. Kubernetes Secrets offer a way to store and manage sensitive information, but when it comes to templating with Helm, we need additional measures to avoid exposing secrets in values files and version control.
3.1. Using helm-secrets and SOPS
The helm-secrets plugin, used with Mozilla Secrets OPerationS (SOPS), provides a secure method for encrypting, decrypting, and managing secrets within Helm charts.
First, we need to install the helm-secrets plugin:
$ helm plugin install https://github.com/jkroepke/helm-secrets
Downloading and installing helm-secrets v4.6.0 ...
https://github.com/jkroepke/helm-secrets/releases/download/v4.6.0/helm-secrets.tar.gz
Installed plugin: secrets
This shows the successful installation of the helm-secrets plugin, specifically version 4.6.0, into our Helm environment.
3.2. Encrypting Secrets With SOPS
Mozilla’s SOPS is a tool for securely managing files with secrets.
Before using SOPS with Helm, we need to install it too:
$ wget https://github.com/mozilla/sops/releases/download/v3.8.1/sops-v3.8.1.linux
$ chmod +x sops-v3.8.1.linux
$ sudo mv sops-v3.8.1.linux /usr/local/bin/sops
We can replace v3.8.1 with the latest version on the SOPS releases page.
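SOPS also needs to know which key to encrypt with. As a minimal sketch, assuming we generated an age key pair with age-keygen, a .sops.yaml file alongside our chart could pin the recipient (the value below is a placeholder for our own public key):
# .sops.yaml -- the age recipient is a placeholder; use our own public key
creation_rules:
  - path_regex: secrets\.yaml$
    age: "<our-age-public-key>"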
With SOPS installed and configured, we can now encrypt our secret file, e.g., a secrets.yaml file:
$ sops -e secrets.yaml > secrets.enc.yaml
Here, the -e flag tells SOPS to encrypt the content. It reads our secrets.yaml file, encrypts it, and outputs the encrypted data to secrets.enc.yaml. We can now safely store this encrypted file in version control.
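To verify the round trip, or to edit the secrets later, we can decrypt the file back to standard output with the -d flag:
$ sops -d secrets.enc.yaml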
3.3. Decrypting and Applying Secrets With Helm
When we deploy our Helm chart, we can use helm-secrets to decrypt our secrets on the fly.
Then, we can directly apply them to our Kubernetes cluster:
$ helm secrets upgrade -f secrets.enc.yaml myrelease mychart
Release "myrelease" has been upgraded. Happy Helming!
NAME: myrelease
LAST DEPLOYED: Thu Apr 6 14:35:49 2023
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
Our output confirms the successful upgrade of the release without revealing sensitive information.
This best practice keeps secrets encrypted at rest in our version control system, with decryption happening only when necessary during deployment.
4. Managing Dependencies With Subcharts
Applications we deploy on Kubernetes often consist of multiple and interdependent components. Helm charts can encapsulate these complex applications, managing their components and dependencies cleanly through subcharts. This ensures that all components are deployed in a coordinated fashion.
Specifically, Helm manages dependencies through a combination of the Chart.yaml file and a charts/ directory. This allows us to bundle external charts as subcharts, on which our applications depend.
For instance, let’s imagine our application requires both a backend service and a database with the following structure:
my-application-chart/
├── Chart.yaml
├── values.yaml
└── charts/
    ├── backend-service/
    │   ├── Chart.yaml
    │   └── values.yaml
    └── database/
        ├── Chart.yaml
        └── values.yaml
Here, my-application-chart/ is the main chart for our application, which depends on two components (backend-service and database). The charts/ directory contains these dependencies as subcharts.
Now, to include these subcharts as dependencies, we would specify them in the main chart’s Chart.yaml file:
dependencies:
  - name: backend-service
    version: 1.2.3
    repository: "@local" # Repository alias; local subcharts can also use a file:// URL
  - name: database
    version: 4.5.6
    repository: "@local"
In this configuration, dependencies lists the charts our main chart depends on, with each entry giving the subchart's name, version, and repository. The repository field points to the subchart's source: @local is a repository alias (one previously added with helm repo add), while charts stored on disk can instead be referenced with a file:// URL. If we vendor the subcharts directly into the charts/ directory, Helm also picks them up from there. For subcharts hosted in a remote repository, we would use that repository's URL instead.
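Once the dependencies are declared, we can let Helm resolve them: the helm dependency update command downloads the listed charts into charts/ and pins the resolved versions in a Chart.lock file:
$ helm dependency update my-application-chart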
5. Handling Resource Policies
In the lifecycle of a Helm chart deployment, we need to manage how we create, update, and delete resources to maintain state and ensure data persistence. To help with this, Helm provides resource policies, which give us more granular control over the lifecycle of individual Kubernetes resources.
Notably, resource policies are annotations we can add to Kubernetes resources to influence how Helm handles them during install, upgrade, and delete operations. The most common use case is preventing certain resources, like PersistentVolumeClaims (PVCs), from being deleted when we uninstall a Helm release, preserving critical data.
For example, say we have a templates/pvc.yaml file within the templates directory of our Helm chart for our PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Release.Name }}-pvc
  annotations:
    "helm.sh/resource-policy": keep
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Here, the helm.sh/resource-policy annotation with the value keep tells Helm to skip deleting the resource when we uninstall the Helm release.
However, since Helm no longer manages the preserved PVC after we uninstall the release, any future interactions with it (such as deletion or modification) must be handled manually through kubectl or other Kubernetes management tools.
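For example, to remove the preserved claim once we no longer need it (assuming the release was named myrelease), we fall back to kubectl:
$ kubectl delete pvc myrelease-pvc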
6. Template Functions and Values
Helm charts gain much of their power and flexibility from the template language Helm provides, built on top of Go templates. This templating mechanism allows us to create configurable deployments that adapt to different environments and needs with minimal changes.
6.1. Simplifying Helm Charts With Functions
Using template functions within our Helm charts is a best practice that simplifies complex logic, reduces redundancy, and enhances chart readability.
Helm includes the Go template language's built-in functions and nearly all of the functions from the Sprig library, which contributes over 100 helpers for string manipulation, data conversion, mathematical operations, and more. Helm also adds a few functions of its own, such as include, tpl, and lookup.
For example, say we have this values.yaml file:
# values.yaml file
...
# Value used in the ConfigMap example
myValue: customValue
# Number to be multiplied for the ConfigMap
myNumber: 5
# Definition of services for the dynamic Service generation example
services:
  - name: web-service
    type: LoadBalancer
    port: 80
    targetPort: 8080
    # nodePort is optional, demonstrating conditional inclusion
    nodePort: 30007
  - name: internal-service
    type: ClusterIP
    port: 8080
    targetPort: 8080
We can use template functions to set a default value and perform operations on values we have in our values.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  myValue: {{ default "defaultValue" .Values.myValue | quote }}
  myNumber: {{ .Values.myNumber | mul 2 | toString }}
Here, if myValue isn’t specified in values.yaml, it defaults to defaultValue. The myNumber value is then multiplied by 2, demonstrating how we can transform data before using it in our templates.
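As a quick sanity check, rendering this template against the values above (for example, with helm template) should produce a data section along these lines:
data:
  myValue: "customValue"
  myNumber: 10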
6.2. Logic Operations
Template functions aren’t limited to simple operations. They can be instrumental in performing complex logic, such as conditionally including resources based on values, iterating over lists, and generating dynamic content based on user input.
Let’s see another example that uses a range loop and conditional statements within a template:
{{- range .Values.services }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .name }}
spec:
  type: {{ .type | default "ClusterIP" }}
  ports:
    - port: {{ .port }}
      targetPort: {{ .targetPort }}
      protocol: TCP
      {{- if .nodePort }}
      nodePort: {{ .nodePort }}
      {{- end }}
{{- end }}
Here, our template generates a Service resource for each entry in the services list defined in our values.yaml, emitting a --- separator before each one so the output stays valid multi-document YAML. The first service (web-service) is of type LoadBalancer and pins an explicit nodePort, while the second (internal-service) is a plain ClusterIP service that omits it.
This best practice shows how the Helm template can conditionally include properties based on the presence of values in the values.yaml file. With this approach, we can flexibly write Helm templates to generate dynamic Kubernetes resource definitions.
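Rendering the loop against the values above would emit two separate Service documents, which is exactly why the --- separator inside the range matters. A sketch of the expected output:
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30007
---
apiVersion: v1
kind: Service
metadata:
  name: internal-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP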
7. Automatic Updates With ConfigMaps and Secrets
A common challenge in Kubernetes deployments is automatically updating applications when their configurations (stored in ConfigMaps or Secrets) change.
Without proper mechanisms, our updates to ConfigMaps or Secrets don’t automatically trigger rolling updates of dependent pods.
One best practice to achieve automatic updates involves using a checksum of a ConfigMap or Secret as an annotation in our deployment template. This method ensures that any change in the ConfigMap or Secret triggers a rolling update of the pods.
Let’s see how we can best implement this in a Helm template.
Suppose we have a ConfigMap (configmap.yaml) within the templates directory of our Helm chart that our application uses:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  my-config-value: "Hello, World!"
Next, we can modify our Deployment template to include an annotation that is a checksum of the ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
        - name: my-container
          image: "my-image:latest"
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Release.Name }}-config
In this template, checksum/config is an annotation on the Deployment’s pod template, calculated as the SHA-256 checksum of the rendered ConfigMap. Helm’s include function renders the ConfigMap template as a string, and the sha256sum function computes its checksum.
Now, when we make a change to the ConfigMap (e.g., changing my-config-value in our configmap.yaml), the checksum value in the Deployment annotation will change because the content of the ConfigMap has changed.
Then, Kubernetes notices the change in the Deployment’s pod template and performs a rolling update of our pods to match the latest configuration.
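The same technique extends to Secrets. Assuming our chart also contains a templates/secret.yaml, we can add a second annotation next to the first so that secret changes roll the pods, too:
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}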
8. Avoiding Secret Regeneration With the lookup Function
A common issue in Helm chart development is managing resources that, once created, should not be arbitrarily changed or regenerated. Secrets, often containing sensitive information like passwords or tokens, are a prime example. Regenerating these can disrupt applications that rely on consistent credentials.
Fortunately, Helm’s lookup function lets us dynamically query live Kubernetes resources from within a chart template. We can use it to check whether a secret already exists before creating it, effectively avoiding unwanted regeneration.
Let’s assume we’re deploying an application that requires a database password stored in a Kubernetes secret. The application’s Helm chart should ensure the database password remains consistent across updates unless we explicitly change or delete the secret.
To do this, a best practice is to create a template file for our secret, e.g., templates/secret.yaml, with the lookup logic to prevent regeneration:
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace "my-app-db-secret" }}
apiVersion: v1
kind: Secret
metadata:
  name: my-app-db-secret
type: Opaque
data:
  {{- if $existingSecret }}
  # Re-use the current password so upgrades don't rotate it
  db-password: {{ index $existingSecret.data "db-password" | quote }}
  {{- else }}
  # First install: generate a random password
  db-password: {{ randAlphaNum 12 | b64enc | quote }}
  {{- end }}
Here, we check whether the my-app-db-secret secret already exists in the release’s namespace. If it does, we carry its current db-password value forward; if not, we generate a random password. Notably, the Secret stays in the rendered manifest either way: if the template rendered nothing once the secret existed, Helm would see the resource vanish from the release manifest on the next upgrade and delete it.
After the first deployment creates the secret, later upgrades, configuration changes, or application version bumps re-use the original password instead of regenerating it, so our application continues to work with consistent credentials. One caveat: lookup queries the live cluster, so it returns an empty result during helm template or helm install --dry-run, where no cluster connection is made.
9. Testing Helm Charts
Helm provides built-in support for testing charts, most notably through the helm test command. This command allows us to define test cases as Kubernetes pod definitions that run a set of tests against our deployed application and then report back on the results.
Specifically, a test case in Helm is a Kubernetes pod that performs a set of operations and then exits with a success or failure status. We execute the tests using the helm test command, which runs the pods annotated with the test hook (helm.sh/hook: test; older charts may use the deprecated test-success alias).
Let’s see an example of defining and running a test that checks the connectivity to a service deployed by our Helm chart:
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  labels:
    purpose: "helm-test"
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: curl-container
      image: curlimages/curl:latest
      command: ['curl']
      args: ['-s', '{{ include "mychart.serviceUrl" . }}']
  restartPolicy: Never
This test pod attempts to connect to the service URL produced by the mychart.serviceUrl template, using the curlimages/curl image to run a curl command against the service. The test succeeds if the pod exits with status code 0 (indicating a successful connection).
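Note that mychart.serviceUrl isn’t a built-in; for this example to render, we’d define it ourselves. A minimal, hypothetical definition in _helpers.tpl could simply point at the Service from earlier:
{{/* Hypothetical helper: in-cluster URL of the chart's Service */}}
{{- define "mychart.serviceUrl" -}}
http://{{ .Release.Name }}-service:80
{{- end -}}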
After deploying our Helm chart, we can now run the test with helm test [RELEASE_NAME]:
$ helm test myapp
Pod myapp-test-connection pending
Pod myapp-test-connection running
Pod myapp-test-connection succeeded
NAME: myapp
LAST DEPLOYED: Thu Apr 6 14:22:35 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: myapp-test-connection
Last Started: Thu Apr 6 14:25:47 2023
Last Completed: Thu Apr 6 14:25:49 2023
Phase: Succeeded
NOTES:
...
In this example, Helm executes all test pods defined in the myapp release’s chart, i.e., those annotated with the test hook.
As we can see, Phase: Succeeded shows the test’s successful execution.
Otherwise, we would typically see Phase: Failed.
Finally, to avoid common pitfalls in our own charts, we can lean on a well-known chart repository, such as Bitnami, which hosts a wide range of production-ready charts:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
Bitnami maintains this repository, offering reliable and up-to-date charts for various applications.
After adding a repository, we can search for an existing chart that fits our needs.
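For example, to look for a production-ready PostgreSQL chart in the newly added repository:
$ helm search repo bitnami/postgresql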
10. Conclusion
In this article, we explored best practices for using Helm charts. We covered various topics essential for effective Kubernetes application deployment and management.
From leveraging the Helm ecosystem and managing dependencies with subcharts to securing secrets and ensuring idempotent deployments, these best practices will enhance our Helm chart deployments’ maintainability, security, and efficiency.