1. Overview
A policy engine on a Kubernetes cluster makes policy management and enforcement easy. In this tutorial, we’ll learn about the open-source Kubernetes policy engine Kyverno.
Notably, this article aims to provide a friendly introduction to Kyverno. This means that the examples we use are simpler than what the Kyverno policy engine is capable of. This tradeoff is necessary to keep the article concise.
2. Kyverno
Kyverno is an open-source Kubernetes policy engine that provides a policy-as-code framework for enforcing custom policies on a Kubernetes cluster. It works by running as a Kubernetes dynamic admission controller. When the kube-apiserver receives a request, it forwards the request data to Kyverno through admission webhooks. The Kyverno controller can then inspect the request data and react to the request according to the installed policies.
One great strength of Kyverno over other Kubernetes policy engines is that we write custom policies as regular Kubernetes resources. In other words, we don't need to learn a new language just to create policies.
The benefit of operating a policy engine on a Kubernetes cluster might not be obvious for a cluster with a small number of users. This is because, at a small scale, cluster policies are usually a matter of communication among the users.
However, as the number of users grows, communication gets harder and it becomes more difficult to monitor the cluster to ensure everyone follows best practices. With a policy engine, the cluster administrator only needs to install the relevant policies, and the engine ensures all the resources on the cluster comply with them.
3. Installation
Kyverno offers multiple ways to install the necessary components onto our Kubernetes cluster. The simplest way to install the Kyverno policy engine is to run kubectl create on the manifest YAML file. This installs all the necessary resources on the cluster:
$ kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.10.0/install.yaml
namespace/kyverno created
serviceaccount/kyverno-admission-controller created
serviceaccount/kyverno-background-controller created
...
The command above fetches the install.yaml manifest file from the Kyverno release page and creates all the resources it defines on the cluster.
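Afterward, we can confirm that the Kyverno controllers are up and running by listing the pods in the kyverno namespace. The output below is abbreviated, and the exact pod names and counts depend on the Kyverno version:
$ kubectl get pods -n kyverno
NAME                                READY   STATUS    RESTARTS   AGE
kyverno-admission-controller-...    1/1     Running   0          1m
kyverno-background-controller-...   1/1     Running   0          1m
...
Once all the pods report Running, the engine is ready to process admission requests.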
Despite its simplicity, this method isn't encouraged for production usage. Instead, we should use the Kyverno Helm chart for production installations, because Helm provides better facilities for performing upgrades or rollbacks in the future. Furthermore, a Helm chart makes it easy to customize the installation, which is especially important in production.
For a complete step-by-step Helm chart tutorial, we can check out the official guide.
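For reference, a minimal Helm-based installation looks like the following sketch. We assume the default chart values here; production setups typically pin a chart version and customize the values:
$ helm repo add kyverno https://kyverno.github.io/kyverno/
$ helm repo update
$ helm install kyverno kyverno/kyverno -n kyverno --create-namespace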
4. Creating Our First Policy
To gain a better understanding of the Kyverno policy engine, we’ll create a simple policy and apply it to our cluster. The policy is a simple validation rule that ensures all the Deployment resources in our cluster have the owner label.
In Kyverno, we define our policy by creating a ClusterPolicy object, which specifies both the rules and the resources that the rules apply to.
Let’s look at an example of ClusterPolicy that enforces the owner label validation rule:
$ cat <<EOF > validate-deployment-label-owner.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mandatory-ownership-info-policy
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-owner-label
    match:
      any:
      - resources:
          kinds:
          - Deployment
    validate:
      message: "Every deployment must label an 'owner'"
      pattern:
        metadata:
          labels:
            owner: "?*"
EOF
We defined a ClusterPolicy object and named it mandatory-ownership-info-policy. In the spec section, we define an array of rules associated with the policy. Specifically, our policy states that all the Deployment resources must have a non-empty value for the owner label.
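Next, we install the policy on the cluster with kubectl apply:
$ kubectl apply -f validate-deployment-label-owner.yaml
clusterpolicy.kyverno.io/mandatory-ownership-info-policy created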
4.1. Validation Policy in Action
To see the policy in action, we can create a Deployment resource without specifying an owner label:
$ kubectl create deployment redis --image=redis
error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request:
resource Deployment/default/redis was blocked due to the following policies
mandatory-ownership-info-policy:
  verify-owner-label: 'validation error: Every deployment must label an ''owner''.
    rule verify-owner-label failed at path /metadata/labels/owner/'
As expected, our resource creation fails with the error message Every deployment must label an 'owner'. Upon receiving the kubectl create request, the Kyverno policy engine evaluates the creation request against all the policies in the cluster. The mandatory-ownership-info-policy then denies the request because it lacks the owner label.
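On the other hand, a Deployment that carries the owner label passes validation. As a quick sanity check, we can apply a minimal compliant manifest, where the owner value alice is just a placeholder:
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    owner: alice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
EOF
deployment.apps/redis created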
Next, let’s take a look at the pieces of a ClusterPolicy in detail.
4.2. Action on Validation Failure
In the specification, the validationFailureAction field specifies the action to take when a request violates the policy we’re defining. The possible values for this field are Enforce and Audit. Furthermore, this field is only applicable when we’re defining a validate rule.
With the Enforce value, we reject any requests for resource creation that violate the rules. Alternatively, we can specify Audit as the validationFailureAction, which only logs the violation and allows the non-compliant requests to pass through.
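To illustrate, switching our earlier policy to audit mode only requires changing a single field. The following sketch shows just the relevant lines:
spec:
  validationFailureAction: Audit
  rules:
  - name: verify-owner-label
    ...
With Audit, Kyverno admits non-compliant Deployments and records the violations in policy reports instead of rejecting the requests.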
4.3. Rule Type
The ClusterPolicy object accepts an array of rules, each with its own specification. There are three main rule types for a ClusterPolicy: validate, mutate, and generate, and each of them serves a different purpose.
First, the validate rule expresses a pattern the resource’s definition must match to be considered compliant. When the targeted resource doesn’t comply with the rule, the policy engine takes action according to the value of the validationFailureAction field. Furthermore, the message field specifies the error message to return when there’s a non-compliant request.
For example, the verify-owner-label rule in our example above states that the targeted resource must have a metadata.labels.owner element. Additionally, the ?* wildcard enforces that the value must contain at least one character: owner: alice passes, while an empty value or a missing label fails.
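As a side note, we can test validate rules without touching the cluster by using the Kyverno CLI. Assuming our policy file and a hypothetical Deployment manifest named deployment.yaml are saved locally, the check looks roughly like this:
$ kyverno apply validate-deployment-label-owner.yaml --resource deployment.yaml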
Then, the mutate rule modifies the resource definition according to the rule. For example, we can define a mutate rule that sets the imagePullPolicy of a container to Always if the container image tag is "latest":
rules:
- name: default-latest-image-always-pull
  match:
    any:
    - resources:
        kinds:
        - Pod
  mutate:
    patchStrategicMerge:
      spec:
        containers:
        - (image): "*:latest"
          imagePullPolicy: "Always"
The parentheses surrounding the image element form a conditional anchor, which works like an if statement. In the example above, the syntax essentially means that if the Pod specifies a container whose image value ends with ":latest", then Kyverno mutates the resource to set imagePullPolicy to Always.
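To try out the rule, we can start a Pod with a latest-tagged image and read back the patched field. Note that Kubernetes itself already defaults imagePullPolicy to Always for the latest tag, so this rule mainly makes that behavior explicit and enforced:
$ kubectl run nginx --image=nginx:latest
pod/nginx created
$ kubectl get pod nginx -o jsonpath='{.spec.containers[0].imagePullPolicy}'
Always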
Finally, the generate rule automates the creation of resources depending on a condition. One prominent use case of the generate rule is to populate Secret objects when a new namespace is created:
spec:
  rules:
  - name: metric-server-api-key-populator
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    generate:
      synchronize: true
      apiVersion: v1
      kind: Secret
      name: metric-server-api-key
      namespace: "{{request.object.metadata.name}}"
      data:
        kind: Secret
        data:
          .api-key: c3VwZXJzZWNyZXQ=
Notably, the generate rule above specifies an additional exclude matcher, which exempts the listed namespaces from the rule. Additionally, synchronize: true keeps the generated resource in sync with the rule: if someone modifies or deletes the Secret, Kyverno restores it. After we install the metric-server-api-key-populator policy, creating a new namespace on the cluster causes the Secret object metric-server-api-key to be created in it automatically.
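We can verify the behavior by creating a namespace and then querying the generated Secret, where the namespace name team-a is arbitrary:
$ kubectl create namespace team-a
namespace/team-a created
$ kubectl get secret metric-server-api-key -n team-a
NAME                    TYPE     DATA   AGE
metric-server-api-key   Opaque   1      5s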
5. Conclusion
In this article, we discussed why a policy engine is beneficial for enforcing standards on a Kubernetes cluster. Then, we explored the Kyverno open-source Kubernetes policy engine.
Subsequently, we demonstrated the process of creating a policy that enforces the presence of the owner label on all the Deployment resources in the cluster. Finally, we examined the three rule types available in the Kyverno policy engine: validate, mutate, and generate.