1. Introduction
Kubernetes (K8s) is an open-source platform for deploying and orchestrating containers, grouped into pods. Its control plane comprises the controllers and management pods that drive a cluster, i.e., a set of Kubernetes nodes. This way, worker pods with user applications can be scheduled for execution on any of the cluster nodes. Further, the system can be extended via custom resource types, controllers, and agents.
KubeVirt is an open-source extension for Kubernetes, which enables virtual machine (VM) management directly from a cluster.
In this tutorial, we explore KubeVirt by showing a basic deployment and its usage. First, we get a general sense of the project and its features. After that, we go through a step-by-step deployment of KubeVirt. Finally, we show a basic demonstration of creating virtual machines within Kubernetes.
We tested the code in this tutorial on Debian 12 (Bookworm) with GNU Bash 5.2.15. Unless otherwise specified, it should work in most POSIX-compliant environments.
2. KubeVirt
The open-source KubeVirt project aims to integrate containerization and virtualization management within the Kubernetes framework:
+---------------------+
|      KubeVirt       |
+=====================+
| Orchestration (K8s) |
+---------------------+
|  Scheduling (K8s)   |
+---------------------+
|  Container Runtime  |
+=====================+
|  Operating System   |
+---------------------+
|      (Virtual)      |
+=====================+
|      Physical       |
+---------------------+
Specifically, the project enables declarative virtual machine creation and management.
To achieve this, KubeVirt implements several resource types, similar to those for pods:
- VirtualMachine (VM): the declarative definition of a virtual machine
- VirtualMachineInstance (VMI): a running virtual machine instance created from a VM
- VirtualMachineInstanceReplicaSet (VMIRS): a replica set based on a VMI template
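Once KubeVirt is deployed, which we do later in this tutorial, we can confirm that these custom types are registered in the cluster:
$ kubectl api-resources --api-group=kubevirt.io
The output should list VirtualMachine, VirtualMachineInstance, and related kinds, together with their short names.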
Further, it adds custom controller pods to handle them:
- virt-controller: the cluster-wide virtualization controller that monitors every VMI and manages the associated VM pods
- virt-launcher: handles the actual initiation and signal processing of a VMI pod
- virt-handler: daemon that runs on every host, reacting to changes like failures and restarts
- libvirtd: virt-launcher leverages this daemon for VMI lifecycle management
With these extensions in place, we can create KubeVirt-managed VMI pods. To ensure the system works on every node, KubeVirt relies on the node-specific virt-handler daemon mentioned above.
Once we configure these components, we can manage virtual machines within Kubernetes:
- create VM via definition
- add (schedule) VM
- run VM
- stop VM
- delete VM
Of course, these options come with the usual automation that we can expect from Kubernetes:
- pod control
- storage management
- network handling
Importantly, KubeVirt enables the deployment of virtualized applications either as VM-only workloads or as hybrid combinations of VMs and containers.
Now, let’s understand how to deploy KubeVirt.
3. KubeVirt Deployment
After getting to know its internals, let’s see a practical deployment of KubeVirt.
3.1. Deploy Kubernetes
Since Kubernetes setup isn’t our focus, we use minikube to get a basic one-node cluster up and running. Although we can deploy Kubernetes and create a cluster on a single node without minikube, using it usually gets a working environment ready more quickly.
First, we install minikube on the system:
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Next, we start a cluster:
$ minikube start --cni=flannel
If working on a VM, we can use the --driver=none option. However, running virtual machines within a virtualized environment also requires nested virtualization or emulation, which we look at later.
Now, let’s alias the kubectl command:
$ alias kubectl='minikube kubectl --'
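Notably, the alias only lasts for the current shell session. To keep it around, we can append the same definition to a startup file such as ~/.bashrc:
$ echo "alias kubectl='minikube kubectl --'" >> ~/.bashrc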
At this point, we should have a working Kubernetes mini-cluster:
$ kubectl get all --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-5dd5756b68-z5pqr       1/1     Running   0          2m35s
kube-system   pod/etcd-xost                      1/1     Running   0          2m47s
kube-system   pod/kube-apiserver-xost            1/1     Running   0          2m47s
kube-system   pod/kube-controller-manager-xost   1/1     Running   0          2m47s
kube-system   pod/kube-proxy-cvrzj               1/1     Running   0          2m35s
kube-system   pod/kube-scheduler-xost            1/1     Running   0          2m47s
kube-system   pod/storage-provisioner            1/1     Running   0          2m45s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  2m49s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   2m47s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   2m47s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   1/1     1            1           2m47s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-5dd5756b68   1         1         1       2m36s
So, let’s move on to KubeVirt.
3.2. minikube KubeVirt Addon
The minikube implementation supports a way to deploy the kubevirt addon directly:
$ minikube addons enable kubevirt
To verify the installation status, we can check the logs of the kubevirt-install-manager pod, specific to minikube:
$ kubectl logs pod/kubevirt-install-manager --namespace kube-system
However, this method doesn’t really reflect how we would deploy on a production Kubernetes installation. Further, the KubeVirt module has had problems in some versions of minikube. So, let’s use kubectl instead.
3.3. kubectl KubeVirt Addon
We begin by acquiring the version information of the latest stable KubeVirt release:
$ VERSION="$(curl --silent https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)"
At this point, VERSION contains the version string of the latest stable KubeVirt release. Using it, we deploy the KubeVirt operator:
$ kubectl create --filename="https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
namespace/kubevirt created
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
priorityclass.scheduling.k8s.io/kubevirt-cluster-critical created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:operator created
serviceaccount/kubevirt-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator-rolebinding created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
deployment.apps/virt-operator created
With its create subcommand, kubectl can process and initialize objects as defined within a YAML file. In this case, we specify the file via a URL to the official KubeVirt repository. Thus, we can already see some operator objects in the new namespace called kubevirt.
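Since the operator pods can take a short while to come up, we can optionally wait for the rollout to complete before continuing; the timeout here is just a generous safety margin:
$ kubectl rollout status deployment/virt-operator --namespace kubevirt --timeout=5m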
Next, we deploy the KubeVirt custom resource (CR) in a similar manner:
$ kubectl create --filename="https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"
kubevirt.kubevirt.io/kubevirt created
Once the operator picks up this resource and rolls out the remaining components, we should have a functioning KubeVirt deployment.
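To be sure, we can also block until the KubeVirt custom resource reports the Available condition; again, the timeout is an arbitrary safety margin:
$ kubectl wait kubevirt.kubevirt.io/kubevirt --namespace kubevirt --for=condition=Available --timeout=10m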
3.4. Cluster With KubeVirt
At this point, the cluster has expanded with components in the kubevirt namespace:
$ kubectl get all --namespace kubevirt
Warning: kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
NAME                                   READY   STATUS    RESTARTS   AGE
pod/virt-api-668b69dd4-4tqsr           1/1     Running   0          69m
pod/virt-controller-7b6686f4ff-jfgpf   1/1     Running   0          69m
pod/virt-controller-7b6686f4ff-vtzwd   1/1     Running   0          69m
pod/virt-handler-6dkb5                 1/1     Running   0          69m
pod/virt-operator-656b9658fc-lsb4x     1/1     Running   0          105m
pod/virt-operator-656b9658fc-wpm9f     1/1     Running   0          105m

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubevirt-operator-webhook     ClusterIP   10.108.24.17    <none>        443/TCP   69m
service/kubevirt-prometheus-metrics   ClusterIP   None            <none>        443/TCP   69m
service/virt-api                      ClusterIP   10.107.156.67   <none>        443/TCP   69m
service/virt-exportproxy              ClusterIP   10.99.78.172    <none>        443/TCP   69m

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/virt-handler   1         1         1       1            1           kubernetes.io/os=linux   69m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/virt-api          1/1     1            1           69m
deployment.apps/virt-controller   2/2     2            2           69m
deployment.apps/virt-operator     2/2     2            2           105m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/virt-api-668b69dd4           1         1         1       69m
replicaset.apps/virt-controller-7b6686f4ff   2         2         2       69m
replicaset.apps/virt-operator-656b9658fc     2         2         2       105m

NAME                            AGE   PHASE
kubevirt.kubevirt.io/kubevirt   70m   Deployed
As expected, we have new virt-api, virt-controller, virt-handler, and virt-operator pods, together with their deployments, replica sets, a daemon set, and the kubevirt custom resource itself.
There is also a more script-friendly way to see the current phase of the addon:
$ kubectl get kubevirt.kubevirt.io/kubevirt --namespace kubevirt --output=jsonpath="{.status.phase}"
Deployed
This way, we verified our KubeVirt addon as Deployed.
3.5. Nested Virtualization or Emulation
Running Kubernetes and KubeVirt within a VM isn’t an uncommon setup. In that case, nested virtualization or emulation is a requirement.
To check whether nested virtualization is enabled, we can read the respective parameter from the /sys pseudo-filesystem:
- Intel: /sys/module/kvm_intel/parameters/nested
- AMD: /sys/module/kvm_amd/parameters/nested
Let’s do that for Intel:
$ cat /sys/module/kvm_intel/parameters/nested
Y
Any output except Y or 1 means it’s disabled.
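To avoid guessing the CPU vendor, a short loop can probe whichever of the two modules is actually loaded:
$ for f in /sys/module/kvm_{intel,amd}/parameters/nested; do [ -f "$f" ] && echo "$f: $(cat $f)"; done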
So, we might need to enable nested virtualization. To do so, we first stop any running VM.
To make the instructions universal, let’s load the current processor type in a variable:
$ CPU=intel # we can also use amd
After that, we unload the respective kvm_$CPU kernel module:
$ sudo modprobe --remove kvm_$CPU
Next, we reload it with nesting enabled:
$ sudo modprobe kvm_$CPU nested=1
Finally, we can persist the setting within a modprobe configuration file. Notably, shell variables aren’t expanded here, so we spell out the actual module name:
$ cat /etc/modprobe.d/kvm.conf
[...]
options kvm_intel nested=1
[...]
Of course, the steps are the same for AMD: we just use CPU=amd and kvm_amd instead.
Alternatively, if our setup doesn’t support nested virtualization, we can --patch KubeVirt to enable useEmulation:
$ kubectl --namespace kubevirt patch kubevirt kubevirt --type=merge --patch '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'
Although slower, emulation would at least enable us to continue with the deployment.
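To confirm the patch took effect, we can read the field back:
$ kubectl get kubevirt.kubevirt.io/kubevirt --namespace kubevirt --output=jsonpath="{.spec.configuration.developerConfiguration.useEmulation}"
true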
3.6. virtctl
The virtctl tool is to KubeVirt what kubectl is to Kubernetes: the main command-line interface.
Just like kubectl, it’s a standalone executable, which we can get and install manually.
First, we check and store the deployed version of KubeVirt and the current architecture:
$ VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt --namespace kubevirt --output=jsonpath="{.status.observedKubeVirtVersion}")
$ ARCH=$(uname --kernel-name | tr A-Z a-z)-$(uname --machine | sed 's/x86_64/amd64/') || windows-amd64.exe
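Optionally, we can verify that both variables resolved to sensible values, i.e., the deployed KubeVirt version and an OS-architecture pair such as linux-amd64:
$ echo "${VERSION} ${ARCH}"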
After that, we download the respective binary as virtctl:
$ curl --location --output virtctl "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}"
Finally, we make it executable and install it in a $PATH directory like /usr/local/bin:
$ chmod +x virtctl
$ sudo install virtctl /usr/local/bin
Now, we should be able to use virtctl with our working deployment.
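As a quick sanity check, the version subcommand should report both the local client version and the server-side version of our deployment:
$ virtctl version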
4. KubeVirt Demo
To demonstrate how KubeVirt works, let’s leverage it to quickly bring up a virtual machine.
4.1. Create VM Manifest
To begin with, we create a fairly simple VM manifest:
$ cat xvm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: xvm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: xvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGlrcyBHZXJnYW5vdlxu
The manifest kind is VirtualMachine, the new custom resource type. Next, we set the VM name in the metadata. After that, we specify two disk devices and one bridge network interface. Finally, we only assign 64M of memory and use the default network.
Importantly, this VM uses a so-called container disk, which is similar to a container image in the world of KubeVirt virtual machines. As such, it doesn’t persist changes but enables fairly easy pulls from a registry such as quay.io/kubevirt/cirros-container-disk-demo.
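The cloudinitdisk volume, in turn, passes base64-encoded user data to cloud-init inside the guest. As a rough sketch, assuming we want to supply our own #cloud-config payload instead, we can encode it and paste the result into userDataBase64:
$ printf '#cloud-config\npassword: secret\nchpasswd: { expire: False }\n' | base64 --wrap=0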
4.2. Deploy VM
Now, we can deploy the xvm virtual machine similarly to a pod by applying the manifest:
$ kubectl apply --filename=xvm.yaml
virtualmachine.kubevirt.io/xvm created
Next, we check the status of this new VM:
$ kubectl get vms
NAME   AGE   STATUS    READY
xvm    4m    Stopped   False
$ kubectl get vms --output=yaml xvm
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kubevirt.io/v1","kind":"VirtualMachine","metadata":{"annotations":{},"name":"xvm","namespace":"default"},"spec":{"running":false,"template":{"metadata":{"labels":{"kubevirt.io/domain":"xvm","kubevirt.io/size":"small"}},"spec":{"domain":{"devices":{"disks":[{"disk":{"bus":"virtio"},"name":"containerdisk"},{"disk":{"bus":"virtio"},"name":"cloudinitdisk"}],"interfaces":[{"masquerade":{},"name":"default"}]},"resources":{"requests":{"memory":"64M"}}},"networks":[{"name":"default","pod":{}}],"volumes":[{"containerDisk":{"image":"quay.io/kubevirt/cirros-container-disk-demo"},"name":"containerdisk"},{"cloudInitNoCloud":{"userDataBase64":"SGlrcyBHZXJnYW5vdlxu"},"name":"cloudinitdisk"}]}}}}
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1
  creationTimestamp: "2024-02-15T17:18:08Z"
[...]
status:
  conditions:
  - lastProbeTime: "2024-02-15T17:18:08Z"
    lastTransitionTime: "2024-02-15T17:18:08Z"
    message: VMI does not exist
    reason: VMINotExists
    status: "False"
    type: Ready
[...]
As we can see, the VM object exists, but it’s not Ready yet, since no instance (VMI) is running.
4.3. Start and Connect to VM
To start a VM, we use virtctl:
$ virtctl start xvm
VM xvm was scheduled to start
In case we don’t have or don’t want to use virtctl, we can also patch the VM to start (or stop) an instance:
$ kubectl patch virtualmachine xvm --type merge --patch='{"spec":{"running":true}}'
Now, we can see that an instance is Running:
$ kubectl get vmis
NAME   AGE   PHASE     IP            NODENAME   READY
xvm    2m    Running   10.244.0.16   xost       True
In addition, the output shows the IP address in the default Flannel network as well as the node our VMI is running on.
At this stage, we use virtctl to connect to the console of our new xvm VM:
$ virtctl console xvm
In this case, CirrOS is just a proof of concept, so we can log in, look around, and detach from the console via ^].
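Besides the serial console, virtctl also supports graphical access over VNC, assuming a VNC viewer is installed locally:
$ virtctl vnc xvm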
4.4. Manage VM
As expected, we can stop a VM with virtctl:
$ virtctl stop xvm
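As with starting, the kubectl alternative is to patch the running flag back to false:
$ kubectl patch virtualmachine xvm --type merge --patch='{"spec":{"running":false}}'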
Finally, to delete a virtual machine, we use the delete subcommand of kubectl:
$ kubectl delete vm xvm
Here, we use the custom vm type. Alternatively, we can leverage delete with the original manifest via --filename.
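For example, assuming the xvm.yaml manifest is still in the current directory:
$ kubectl delete --filename=xvm.yaml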
5. Summary
In this article, we delved into the KubeVirt addon to Kubernetes.
In conclusion, since virtualization remains part of many workflows despite the advancements of containerization, integrating VM functionality within Kubernetes can help streamline operations.