Running KVM and VMware VMs in Container Engine for Kubernetes

September 17, 2021 | 5 minute read
Gilson Melo
Director of Product Management

With the advent of microservices, people commonly ask, "Is it possible to run my legacy virtual machines (VMs) in Kernel-based Virtual Machine (KVM) or VMware with my microservices on Kubernetes, or do I need to migrate them to containers?" One possible answer to that question is KubeVirt.

The KubeVirt project turns Kubernetes into an orchestration engine for application containers and virtual machine workloads. It addresses the needs of development teams that have adopted or want to adopt Kubernetes but have existing VM-based workloads that they can’t easily put in containers. The technology provides a unified development platform in which developers can build, modify, and deploy applications that reside in both application containers and VMs in a common, shared environment.

Note: KubeVirt can be tested on external cloud providers like Oracle Cloud Infrastructure, but this setup is not meant for production yet!

Getting Started: OKE for Containers and VM Workloads

Oracle Cloud Infrastructure Container Engine for Kubernetes (sometimes referred to as OKE) provides a reliable and scalable integrated workflow platform to build, test, deploy, and monitor your code in the cloud. Container Engine for Kubernetes helps you deploy, manage, and scale Kubernetes clusters in the cloud. With it, you can build dynamic containerized applications by incorporating Kubernetes with services running on Oracle Cloud Infrastructure.

KubeVirt can be deployed on Container Engine for Kubernetes worker nodes with bare metal or VM shapes. If your cluster worker nodes are provisioned with VM shapes, KubeVirt runs your legacy KVM or VMware virtual machines in nested mode.


The Oracle Cloud Infrastructure command line interface (CLI) and the Container Engine for Kubernetes kubectl CLI must be installed and configured. For more information, see the Container Engine for Kubernetes documentation. You can also use Cloud Shell, a browser-based terminal accessible from the Oracle Cloud Infrastructure Console. The CLIs are preinstalled and configured in Cloud Shell.


Follow these steps to deploy KubeVirt with Container Engine for Kubernetes.

Step 1: Deploy the KubeVirt Operator

  1. Create a Container Engine for Kubernetes cluster with a node pool, and verify that the worker nodes are running.

  2. Set the version environment variable to be used on commands:

    export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- - | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)
  3. Using the kubectl tool, deploy the KubeVirt operator:

    kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
  4. Ensure that the KubeVirt operator is running:

    [root@kvmvbox ~]# kubectl get pods -n kubevirt
    NAME                               READY   STATUS    RESTARTS   AGE
    virt-operator-6b5455546b-56dvx     1/1     Running   0          39h
    virt-operator-6b5455546b-5pwdj     1/1     Running   0          39h
  5. Check for the virtualization extensions. When you use Oracle Cloud Infrastructure VM shapes with Oracle Linux images, nested virtualization should be enabled by default, and the cpuinfo file should show the vmx (Intel) or svm (AMD) flag. Check it with the egrep command:

    egrep 'svm|vmx' /proc/cpuinfo
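The release-lookup pipeline in step 2 can be sanity-checked offline against a mocked fragment of the GitHub releases response. The tags below are illustrative, but the filters are the same ones the export command uses:

```shell
# Illustrative fragment of the GitHub releases API response.
releases_json='"tag_name": "v0.43.0",
"tag_name": "v0.44.2-rc.0",
"tag_name": "v0.44.1",'

# Same filters as step 2: keep tag_name lines, drop pre-releases
# (tags containing "-"), version-sort, and take the highest.
version=$(echo "$releases_json" | grep tag_name | grep -v -- - \
  | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)
echo "$version"   # v0.44.1
```

Note that `grep -v -- -` is what excludes release-candidate tags such as `v0.44.2-rc.0`, and `xargs` strips the surrounding quotes and whitespace.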

Step 2: Deploy KubeVirt

  1. Deploy KubeVirt by creating a dedicated custom resource:

    kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
  2. Check the deployment. When all components are up, the KubeVirt custom resource reports the Deployed phase:

    kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"

    Screenshot that shows the KubeVirt deployment running.

Step 3: Install virtctl

KubeVirt ships a client binary, virtctl, that gives quick access to the serial and graphical ports of a VM and handles start and stop operations. You can retrieve it from the KubeVirt release page:

curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
chmod +x virtctl

Step 4: Test KubeVirt

Now it’s time to test KubeVirt with Container Engine for Kubernetes by running legacy KVM or VMware VMs, along with containers. KubeVirt provides some labs that let you test it before trying legacy KVM or VMware VMs.

The first lab, Use KubeVirt, walks you through the creation of a virtual machine instance on Kubernetes, and then shows you how to use virtctl to interact with its console.

Screenshot that shows the virtctl test run on the console.
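For reference, the manifest that lab applies can be sketched as follows. The VM name and the CirrOS demo container disk shown here are the ones the KubeVirt labs commonly use; treat them as illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm                     # illustrative name
spec:
  running: false                   # start it later with virtctl
  template:
    metadata:
      labels:
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```

After applying it with kubectl, `./virtctl start testvm` boots the VM and `./virtctl console testvm` attaches to its serial console.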

The second lab, Experiment with the Containerized Data Importer (CDI), shows you how to use the CDI to import a VM image into a Persistent Volume Claim (PVC), and then how to define a VM to use the PVC.
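The import in that lab is driven by a DataVolume custom resource. A minimal sketch of its shape, assuming CDI is installed and with an illustrative name, source URL, and size, looks like this:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: imported-vm-disk           # illustrative name
spec:
  source:
    http:
      url: https://example.com/images/disk.img   # illustrative image URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```

CDI downloads the image, converts it if needed, and leaves it in the PVC, which a VirtualMachine can then reference as a disk.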

Running KVM and VMware VMs in Container Engine

If you’re planning to run a KVM or VMware VM in Container Engine for Kubernetes, you must first convert the disks to a raw format. Two free utilities can help you do that: VBoxManage, the Oracle VM VirtualBox CLI, and the QEMU disk image utility (qemu-img).

  • VBoxManage is the CLI for Oracle VM VirtualBox. You can use it to control Oracle VM VirtualBox from the command line of your host OS. VBoxManage exposes all the features of the virtualization engine, and it lets you convert disks into different formats.

    Use the following command in the CLI to convert VM disks to a raw format:

    VBoxManage clonehd --format RAW kvm_qcow2_OR_VMware_vmdk_disk disk-name.img
  • QEMU disk image utility, known as qemu-img, also lets you convert disks into other formats. To install it on a machine running Oracle Linux, for example, run the following command:

    sudo yum install qemu-img

    Then, run this command to convert VM disks to a raw format:

    qemu-img convert kvm_qcow2_OR_VMware_vmdk_disk -O raw disk-name.img
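To make the choice between the two tools mechanical, here is a small hypothetical helper (not from the article) that dispatches on the source disk's extension. It only prints the command it would run, so the logic can be checked without VirtualBox or qemu-img installed:

```shell
# Print the conversion command for a given source disk (hypothetical helper).
convert_cmd() {
  src=$1; out=$2
  case "$src" in
    *.vdi)          echo "VBoxManage clonehd --format RAW $src $out" ;;
    *.qcow2|*.vmdk) echo "qemu-img convert $src -O raw $out" ;;
    *)              echo "unsupported disk format: $src" >&2; return 1 ;;
  esac
}

convert_cmd legacy.qcow2 disk-name.img
# qemu-img convert legacy.qcow2 -O raw disk-name.img
```

In practice you would run the printed command (or pipe it to `sh`), producing the raw `disk-name.img` used in the next section.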

After the disks are converted, you can make them available to be used in Container Engine for Kubernetes. You have a few options:

  • Upload the disk to the worker nodes and run it with a hostPath volume.

  • Create a Docker image of the raw disk and upload it into a public registry like Oracle Cloud Infrastructure Registry.

  • Clone a disk and create a persistent volume claim with it.

All of these options are explained in the KubeVirt GitHub repo and KubeVirt documentation.
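In a VirtualMachine spec, the three options correspond to three different volume types. The stanzas below are illustrative sketches (paths, image names, and claim names are assumptions, and one VM would normally use just one of them):

```yaml
volumes:
  # Option 1: raw disk uploaded to the worker node.
  # Note: hostDisk requires the HostDisk feature gate to be enabled.
  - name: hostdisk
    hostDisk:
      path: /var/kubevirt/disk-name.img     # illustrative path on the node
      type: Disk
  # Option 2: disk baked into a container image pushed to a registry.
  - name: registrydisk
    containerDisk:
      image: iad.ocir.io/mytenancy/vm-disks/disk-name:latest   # illustrative OCIR path
  # Option 3: disk cloned or imported into a persistent volume claim.
  - name: pvcdisk
    persistentVolumeClaim:
      claimName: converted-vm-disk          # illustrative PVC name
```

For option 2, the container image is typically built from an empty base with the raw disk copied into the image's /disk/ directory, as described in the KubeVirt containerDisk documentation.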


Here’s a quick example of KubeVirt in action with Container Engine for Kubernetes. In this demo, we use a Microsoft Windows 2012 KVM VM image that was converted to a raw format and uploaded to a worker node. It now runs alongside native Nginx containers in the same pod subnet CIDR, and the Windows OS can reach the Nginx web page through the internal load balancer IP address. The Windows KVM VM behaves like a Kubernetes pod, which lets it interact natively with other Container Engine for Kubernetes services.
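The internal load balancer in this demo can be declared as an ordinary Kubernetes service with the OCI-specific internal annotation. This is a sketch under assumed names (the `app: nginx` selector is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # Tells the OCI cloud controller to provision the load balancer
    # on a private IP instead of a public one.
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx          # illustrative pod label
  ports:
    - port: 80
      targetPort: 80
```

The VM then reaches the Nginx pods through the private IP that the load balancer receives.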

Screenshot that shows a Windows KubeVirt KVM VM running in Container Engine for Kubernetes.

