1. Introduction

Helm charts are essential tools for managing Kubernetes applications. They package all the Kubernetes resources we need to deploy an application into a single unit, making deployments more straightforward and reproducible. However, like any configuration management tool, Helm charts are susceptible to errors that can lead to failed deployments or misconfigured applications.

In this tutorial, we’ll explore various methods to validate Helm chart content. Doing so ensures our deployments are reliable and free from common pitfalls. We’ll cover tools and techniques such as helm lint, helm template, advanced schema validation methods, and some best practices. Let’s get started!

2. Why Validate Helm Charts?

Validating Helm charts is crucial for several reasons. First, it ensures that our configurations are correct and complete. Misconfigurations can lead to deployment failures, unexpected behavior, or even security vulnerabilities. By validating our Helm charts, we catch these issues early in the development cycle.

Common pitfalls in Helm charts include syntax errors, missing fields, and incorrect resource specifications. These errors can be challenging to diagnose if not caught early. However, validation tools help us identify these problems before they reach production, saving us time and resources.

Moreover, thorough validation benefits both development and CI/CD pipelines. During development, validation helps maintain code quality and consistency. Also, in CI/CD pipelines, it ensures that we deploy only valid configurations, reducing the risk of runtime errors and service disruptions.

3. Using helm lint

The helm lint command is a straightforward way to check our Helm charts for potential issues. It analyzes the chart’s structure and content, identifying problems that could cause deployment failures.

3.1. Checking a Single Chart

Let’s see helm lint in action by running it on a specific chart:

$ helm lint ./mychart

==> Linting ./mychart
[ERROR] Chart.yaml: version is required
[INFO] Chart.yaml: icon is recommended

Error: 1 chart(s) linted, 1 chart(s) failed

As we can see from our output:

  • [ERROR] Chart.yaml: version is required – indicates a critical issue that we must fix
  • [INFO] Chart.yaml: icon is recommended – shows a non-critical suggestion for improvement

Using helm lint helps us catch common issues early in the development process. It ensures that our charts are well-structured and adhere to best practices.
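By default, helm lint fails a chart only on [ERROR] messages. If we want a stricter bar, the --strict flag makes lint warnings fail the check as well, which is useful when we want automation to enforce best practices rather than just correctness:

```
$ helm lint --strict ./mychart
```

With --strict, messages that would otherwise pass as warnings produce a nonzero exit code.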

3.2. Checking All Charts in a Directory

We can extend the utility of helm lint to validate all charts within a directory, including any subcharts. This approach is useful when we’re managing multiple Helm charts.

Let’s assume we have a directory structure as follows:

charts/
├── chart1
│   ├── Chart.yaml
│   ├── templates/
│   └── ...
├── chart2
│   ├── Chart.yaml
│   ├── templates/
│   └── ...
└── subcharts/
    ├── subchart1
    │   ├── Chart.yaml
    │   ├── templates/
    │   └── ...
    └── subchart2
        ├── Chart.yaml
        ├── templates/
        └── ...

Since a plain glob like charts/* would also pass the subcharts directory itself, which isn't a chart, to helm lint, we can instead find every Chart.yaml and lint its parent directory:

$ find charts -name Chart.yaml | while read -r chart; do helm lint "$(dirname "$chart")"; done

==> Linting charts/chart1
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

==> Linting charts/chart2
[ERROR] Chart.yaml: version is required
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 1 chart(s) failed

==> Linting charts/subcharts/subchart1
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

==> Linting charts/subcharts/subchart2
[ERROR] Chart.yaml: version is required
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 1 chart(s) failed

This loop checks all charts, including subcharts, for potential issues.

Notably, a best practice for linting Helm charts is to run helm lint regularly during development. We should also address all errors and consider resolving warnings. Additionally, we can integrate helm lint into CI/CD pipelines to automate validation.

4. Using helm template With Kubernetes Validation

The helm template command renders our Helm chart templates locally, allowing us to inspect the resulting Kubernetes manifests. By combining this with Kubernetes’ dry-run feature, we can validate the generated manifests without actually deploying them.

Let’s see an example:

$ helm template ./mychart | kubectl apply --dry-run=client -f -
error: error validating "STDIN": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Container

In our command here, helm template ./mychart renders the Helm chart located in the ./mychart directory. Then, kubectl apply --dry-run=client -f - validates the rendered manifests against Kubernetes API schemas without applying them.

Let’s better understand our sample output:

  • ValidationError(Deployment.spec.template.spec.containers[0]) – indicates an error in the first container specification within a Deployment resource
  • unknown field “imagePullSecrets” – shows that the imagePullSecrets field is misplaced: it belongs at the pod spec level, not inside a container definition

Using helm template with Kubernetes validation helps us ensure that the rendered manifests are valid Kubernetes resources.

Compared to helm lint, helm template with Kubernetes validation catches issues that helm lint might miss. It also provides detailed error messages from the Kubernetes API and ensures that rendered manifests conform to Kubernetes standards.
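For an even stronger check, we can use a server-side dry run, which submits the manifests to the API server for validation, including admission logic, without persisting anything. Unlike the client-side variant, this requires access to a live cluster:

```
$ helm template ./mychart | kubectl apply --dry-run=server -f -
```

If the cluster accepts the resources, kubectl reports each one with a server dry run marker instead of changing any state.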

5. Using kubeconform for Schema Validation

kubeconform is a powerful tool for validating Kubernetes manifests against their respective JSON schemas. By using it in conjunction with helm template, we can ensure that our rendered manifests adhere to the expected schema definitions.

5.1. Basic Validation With kubeconform

Let’s see how we can leverage kubeconform for basic schema validation:

$ helm template ./mychart | kubeconform -strict -verbose
stdin - Deployment my-deployment is valid
stdin - Service my-service is valid

Here, helm template ./mychart renders the Helm chart in the ./mychart directory. Then, kubeconform -strict validates the rendered manifests with strict schema checks, while -verbose makes kubeconform report valid resources as well (by default, it only prints invalid ones). Since we pipe the manifests in, kubeconform labels them as stdin.

Our example output shows that the Deployment and Service resources conform to the expected schema definitions.

5.2. Handling CRDs With kubeconform

kubeconform also provides additional options to handle complex scenarios, such as CRDs (Custom Resource Definitions).

Thus, if our chart includes CRDs, we can use the -ignore-missing-schemas flag to skip validation for unknown schemas:

$ helm template ./mychart | kubeconform -strict -verbose -ignore-missing-schemas
stdin - Deployment my-deployment is valid
stdin - CustomResourceDefinition my-crd skipped

Here, -ignore-missing-schemas skips validation for CRDs or any resource types without a known schema. This is particularly useful when we're working with custom resources that might not have a predefined schema in kubeconform's schema registry.

5.3. Specifying Kubernetes Version

To ensure compatibility with specific Kubernetes versions, we can also use the -kubernetes-version flag:

$ helm template ./mychart | kubeconform -strict -verbose -kubernetes-version 1.18.0
stdin - Deployment my-deployment is valid
stdin - Service my-service is valid

Here, -kubernetes-version 1.18.0 validates the manifests against the schemas for Kubernetes version 1.18.0 rather than the default (master).

Notably, for a more comprehensive validation process, we can combine both CRD handling and version specification:

$ helm template ./mychart | kubeconform -strict -verbose -ignore-missing-schemas -kubernetes-version 1.18.0
stdin - Deployment my-deployment is valid
stdin - CustomResourceDefinition my-crd skipped
stdin - Service my-service is valid

This command combines the strict schema validation with skipping unknown schemas and specifying a Kubernetes version.

In short, using kubeconform enhances the robustness of our validation process, ensuring that our Helm charts generate manifests that comply with Kubernetes standards.

6. Validating With values.schema.json

While basic validation methods like helm lint and kubeconform cover most use cases, advanced validation techniques can help us catch even more complex issues. One such technique is schema validation with a values.schema.json file.

The values.schema.json file allows us to impose a structure on our values.yaml file, ensuring that user-provided values adhere to a predefined schema.

6.1. Defining the Schema

The values.schema.json provides detailed specifications for our values.yaml. This is crucial for maintaining consistency and correctness.

Let’s see an example schema:

{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "properties": {
    "image": {
      "description": "Container Image",
      "properties": {
        "repo": {
          "type": "string"
        },
        "tag": {
          "type": "string"
        }
      },
      "type": "object"
    },
    "name": {
      "description": "Service name",
      "type": "string"
    },
    "port": {
      "description": "Port",
      "minimum": 0,
      "type": "integer"
    },
    "protocol": {
      "type": "string"
    }
  },
  "required": [
    "protocol",
    "port"
  ],
  "title": "Values",
  "type": "object"
}

Here, our sample schema defines image, name, port, and protocol properties. It also specifies that protocol and port are required fields. In addition, it enforces that repo and tag are strings, port is an integer, and provides descriptions for each property.

Now, when we use commands like helm install, helm upgrade, helm lint, or helm template, Helm automatically validates the values.yaml file against this schema.
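For illustration, a values.yaml that satisfies this schema might look as follows (the concrete values here are, of course, just examples):

```yaml
image:
  repo: nginx
  tag: "1.25"
name: my-service
port: 8080
protocol: TCP
```

Since the required protocol and port fields are present and correctly typed, validation passes.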

6.2. Nested Properties and Constraints

To further illustrate, let’s see a more complex schema that includes nested properties and additional constraints:

{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "properties": {
    "image": {
      "description": "Container Image",
      "properties": {
        "repo": {
          "type": "string"
        },
        "tag": {
          "type": "string"
        },
        "pullPolicy": {
          "type": "string",
          "enum": ["Always", "IfNotPresent", "Never"]
        }
      },
      "type": "object",
      "required": ["repo", "tag"]
    },
    "resources": {
      "description": "Resource requests and limits",
      "properties": {
        "requests": {
          "properties": {
            "cpu": {
              "type": "string"
            },
            "memory": {
              "type": "string"
            }
          },
          "type": "object",
          "required": ["cpu", "memory"]
        },
        "limits": {
          "properties": {
            "cpu": {
              "type": "string"
            },
            "memory": {
              "type": "string"
            }
          },
          "type": "object"
        }
      },
      "type": "object"
    },
    "name": {
      "description": "Service name",
      "type": "string"
    },
    "port": {
      "description": "Port",
      "minimum": 0,
      "type": "integer"
    },
    "protocol": {
      "type": "string"
    }
  },
  "required": [
    "protocol",
    "port",
    "name"
  ],
  "title": "Values",
  "type": "object"
}

Let’s better understand this schema:

  • Nested Properties: Adds pullPolicy under image and resources with nested requests and limits
  • Enums: The pullPolicy field is constrained to specific values – Always, IfNotPresent, or Never
  • Additional Required Fields: repo and tag under image, and cpu and memory under requests are required

As with the previous schema, commands like helm install, helm upgrade, helm lint, and helm template validate the values.yaml file against this schema.
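Conversely, a values.yaml like the following would be rejected, since it omits the required name field and sets pullPolicy to a value outside the allowed enum (again, hypothetical example values):

```yaml
image:
  repo: nginx
  tag: "1.25"
  pullPolicy: Sometimes
port: 8080
protocol: TCP
```

Helm then refuses to install, upgrade, or template the chart until the values conform to the schema.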

Ultimately, schema validation’s main benefit is ensuring consistency and correctness of user-provided values. It also catches errors early, preventing us from applying invalid configurations while improving the maintainability and readability of our Helm charts.

7. Custom Validation Scripts Using Go Templates

In some cases, a static schema file might not be sufficient for our validation needs. For a more flexible approach, we can write custom validation scripts using Go templates. These scripts can implement dynamic logic to validate Helm chart values based on specific conditions.

7.1. Basic Custom Validation Script

Let’s see an example of a custom validation script using Go templates:

{{- if .Values.some_feature.enabled -}}
  {{- if and (not .Values.some_feature.ip) (not .Values.some_feature.dns) -}}
    When some_feature is enabled, either ip or dns must be provided.
  {{- end -}}
{{- end -}}

In this script, {{- if .Values.some_feature.enabled -}} checks whether some_feature is enabled. Then, {{- if and (not .Values.some_feature.ip) (not .Values.some_feature.dns) -}} ensures that either ip or dns is provided when some_feature is enabled. If both are missing, the template renders the error message into the output, which corrupts the generated manifest and surfaces the problem during rendering or deployment.
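Alternatively, Helm's built-in fail function aborts rendering immediately with the given message, which surfaces the problem more directly than emitting text into the output. A minimal sketch, reusing the same hypothetical some_feature values:

```
{{- if .Values.some_feature.enabled -}}
  {{- if and (not .Values.some_feature.ip) (not .Values.some_feature.dns) -}}
    {{- fail "When some_feature is enabled, either ip or dns must be provided." -}}
  {{- end -}}
{{- end -}}
```

With fail, helm template, helm install, and helm upgrade all stop with this message instead of producing a broken manifest.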

7.2. Validating Multiple Conditions

We can also decide to validate multiple conditions to manage more complex scenarios:

{{- if .Values.database.enabled -}}
  {{- if and (not .Values.database.host) (not .Values.database.port) -}}
    When the database is enabled, both host and port must be provided.
  {{- else if not .Values.database.host -}}
    When the database is enabled, host must be provided.
  {{- else if not .Values.database.port -}}
    When the database is enabled, port must be provided.
  {{- end -}}
{{- end -}}

{{- if and .Values.cache.enabled (not .Values.cache.size) -}}
  When cache is enabled, size must be provided.
{{- end -}}

Here, {{- if .Values.database.enabled -}} checks if the database is enabled. Then, {{- if and (not .Values.database.host) (not .Values.database.port) -}} ensures both host and port are provided if the database is enabled. Also, additional conditions check if either host or port is missing.

Lastly, for the cache validation, {{- if and .Values.cache.enabled (not .Values.cache.size) -}} ensures size is provided if the cache is enabled.

7.3. Organizing Validation Scripts

For maintainability, we should store validation scripts in a dedicated subdirectory under templates/, such as templates/validations. Since Helm only renders files inside the templates/ directory, validation templates placed elsewhere would simply be ignored. This approach also keeps validation logic organized and easy to maintain, grouping related validations together.

Let’s see a best practice example directory structure:

chart/
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── validations/
│       ├── database_validation.yaml
│       ├── feature_validation.yaml
│       └── cache_validation.yaml
└── values.yaml

By using Go templates for validation, we can write more complex and dynamic checks that static JSON schemas can’t handle. We can validate combinations of values, conditional logic, and other intricate scenarios, enhancing our Helm charts’ robustness by catching more nuanced errors.

8. Conclusion

In this article, we explored various methods and best practices for validating Helm charts. We discussed tools like helm lint, helm template, and kubeconform to catch errors early and ensure that our charts generate valid Kubernetes manifests. We also examined advanced techniques, such as using values.schema.json and custom validation scripts for additional validation layers for complex scenarios.

By integrating these validation steps into our development workflows and CI/CD pipelines, we can enhance the reliability and stability of our Kubernetes applications and deployments.