
Which open source tool is best to run VMs in a cloud native environment?

If you’re like many IT professionals today, you want to go cloud native. But you have legacy workloads, like monoliths, that only run on virtual machines.

You can maintain separate environments for your cloud-native workloads and your legacy workloads. But wouldn’t it be better if you could find a way to integrate the VMs into your cloud-native setup so you can manage them seamlessly alongside your containers?

Fortunately, there is a way. This article discusses four open source solutions for running VMs in a cloud-native environment with minimal reconfiguration or customization.

Why run VMs in cloud native environments?

Before we look at the tools, let’s consider why it’s important to be able to run VMs in an environment that otherwise consists of containerized, loosely coupled, cloud-native workloads.

The main reason is simple: VMs hosting legacy workloads aren’t going away, but it’s a burden to maintain separate hosting environments to run them.

Meanwhile, transforming your legacy workloads to meet cloud-native standards may not be an option. In a perfect world, you would have the time and technical resources to refactor your old workloads so they can run natively in a cloud-native environment; in the real world, that’s not always possible.

So you need tools, such as one of the four open source solutions described below, that allow legacy VM workloads to coexist peacefully with cloud-native workloads.

1. Run VMs with KubeVirt

Probably the most popular solution for deploying virtual machines in a cloud-native environment is KubeVirt.

KubeVirt works by running virtual machines in Kubernetes Pods. To run a virtual machine alongside containers, simply install KubeVirt into an existing Kubernetes cluster with:

export RELEASE=v0.35.0
# Deploy the KubeVirt operator
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
# Create the KubeVirt CR (instance deployment request) which triggers the actual installation
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
# wait until all KubeVirt components are up
kubectl -n kubevirt wait kv kubevirt --for condition=Available

You then create and apply a YAML file that describes each of the virtual machines you want to run. KubeVirt runs each machine in a container, so from a Kubernetes perspective, the VM is just a regular pod (with a few limitations, discussed in the next section). However, you still get a VM image, persistent storage, and fixed CPU and memory allocations, just as you would with a conventional VM.

What this means is that KubeVirt essentially requires no changes to your VM. All you need to do is install KubeVirt and create deployments for your VMs to run them as pods.
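As an illustration, here is a minimal VirtualMachine manifest, a sketch based on KubeVirt’s Cirros container-disk demo image; the name `testvm` and the memory figure are arbitrary placeholders:

```shell
# Sketch of a minimal KubeVirt VirtualMachine (names and sizes are placeholders)
cat <<'EOF' > testvm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true            # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64M    # fixed memory allocation, like a conventional VM
      volumes:
      - name: containerdisk
        containerDisk:     # VM image shipped as a container image
          image: quay.io/kubevirt/cirros-container-disk-demo
EOF

# Apply only when a cluster with KubeVirt is actually reachable
if command -v kubectl >/dev/null && kubectl cluster-info >/dev/null 2>&1; then
  kubectl apply -f testvm.yaml
  kubectl get vmis   # the running VM appears as a VirtualMachineInstance
fi
```

Once the VM is running, `kubectl get pods` shows the launcher pod that wraps it, so standard pod tooling (logs, labels, node scheduling) still applies.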

2. The Virtlet Approach

If you really want to commit to treating VMs like pods, you might like Virtlet, an open source tool from Mirantis.

Virtlet is similar to KubeVirt in that Virtlet also allows you to run VMs in Kubernetes Pods. However, the main difference between these two tools is that Virtlet offers an even deeper integration of VMs into the Kubernetes Pod specification. This means you can do things with Virtlet, like manage VMs as part of DaemonSets or ReplicaSets, which you can’t do natively with KubeVirt. (KubeVirt has similar features, but they are add-ons rather than native parts of Kubernetes.)

Mirantis also says Virtlet usually offers better network performance than KubeVirt, though it’s hard to know definitively because there are so many variables involved in network configuration.
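To illustrate the deeper integration, a Virtlet VM is declared as an ordinary Pod and routed to the Virtlet runtime via an annotation. The following is a sketch based on the conventions in Virtlet’s documentation; the Cirros image and resource figures are placeholders:

```shell
# Sketch of a Virtlet VM declared as a plain Kubernetes Pod
cat <<'EOF' > cirros-vm.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # Tells the CRI proxy to hand this pod to the Virtlet runtime
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  nodeSelector:
    extraRuntime: virtlet    # schedule onto a node running Virtlet
  containers:
  - name: cirros-vm
    # The virtlet.cloud/ prefix marks this as a VM image, not a container image
    image: virtlet.cloud/cirros
    resources:
      limits:
        memory: 160Mi
EOF

if command -v kubectl >/dev/null && kubectl cluster-info >/dev/null 2>&1; then
  kubectl apply -f cirros-vm.yaml
fi
```

Because this is a plain Pod spec, the same template can sit inside a ReplicaSet or DaemonSet, which is the native integration the article describes.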

3. Istio Support for VMs

But what if you don’t want to manage your VMs as if they were containers? What if you want to treat them as VMs, while still allowing them to integrate easily with microservices?

Probably the best solution is to connect your VMs to Istio, the open-source service mesh. With this approach, you can deploy and manage VMs using standard VM tooling, while still managing networking, load balancing, etc. through Istio.

Unfortunately, the process for connecting VMs to Istio is relatively cumbersome and difficult to automate at the moment. It comes down to installing Istio on each of the VMs you want to connect, configuring a namespace for them, and then connecting each VM to Istio. For a complete overview of the Istio VM integration process, view the documentation.
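At the mesh level, the VM is represented by Istio’s WorkloadGroup (and per-instance WorkloadEntry) resources. As a rough sketch, assuming a hypothetical `vm-workloads` namespace and `legacy-app` names:

```shell
# Sketch: a WorkloadGroup template describing VM instances to the mesh
cat <<'EOF' > vm-workloadgroup.yaml
apiVersion: networking.istio.io/v1beta1
kind: WorkloadGroup
metadata:
  name: legacy-app
  namespace: vm-workloads
spec:
  metadata:
    labels:
      app: legacy-app          # lets mesh services select the VM like a pod
  template:
    serviceAccount: legacy-app
EOF

if command -v kubectl >/dev/null && kubectl cluster-info >/dev/null 2>&1; then
  kubectl create namespace vm-workloads
  kubectl apply -f vm-workloadgroup.yaml
  # istioctl can then generate the bootstrap files to copy onto the VM, e.g.:
  # istioctl x workload entry configure -f vm-workloadgroup.yaml -o vm-files
fi
```

The VM itself still runs under your usual VM tooling; only its network identity and traffic policy are handled by Istio.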

4. Containers and VMs side by side with OpenStack

The techniques we’ve looked at so far take cloud-native platforms, like Kubernetes or Istio, and add VM support to them.

An alternative approach is to take a non-cloud-native platform that can run VMs and graft cloud-native tooling onto it.

That’s what you get when you run VMs and containers together on OpenStack. OpenStack was originally designed as a way to deploy VMs (among other types of resources) to build a private cloud. But OpenStack can now also host Kubernetes.

So you can use OpenStack to deploy and manage VMs, while simultaneously running cloud-native, containerized workloads on OpenStack via Kubernetes. You’d end up with two layers of orchestration – the underlying OpenStack installation and then the Kubernetes environment – so this approach is more complex from an administrative standpoint.
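The two layers can be sketched with the OpenStack CLI, assuming `python-openstackclient` (plus the Magnum plugin for the cluster command) and sourced credentials; the flavor, image, network, and template names below are placeholders for your cloud:

```shell
# Write the two-layer deployment as a script (resource names are placeholders)
cat <<'EOF' > two-layer-deploy.sh
#!/bin/sh
# Layer 1: a conventional VM, managed with standard VM tooling
openstack server create --flavor m1.small --image ubuntu-22.04 \
  --network private legacy-vm

# Layer 2: a Kubernetes cluster (via OpenStack Magnum) for containers
openstack coe cluster create --cluster-template k8s-template \
  --node-count 3 k8s-cluster
EOF

# Syntax-check the script; run it only against a real OpenStack cloud
sh -n two-layer-deploy.sh && echo "deploy script syntax OK"
```

The VM in layer 1 never touches Kubernetes, which is exactly the separation this approach is meant to preserve.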

However, the main advantage is that you can keep your VMs and containers relatively separate from each other, because the VMs are not part of Kubernetes. Also, you wouldn’t limit yourself to Kubernetes tooling for managing the VMs. You can treat your VMs as standard VMs, while treating containers as standard containers.

Conclusion

The open-source ecosystem offers a number of approaches to help VMs coexist with cloud-native workloads. The best solution for you depends on whether you want to take a Kubernetes-centric approach (in which case KubeVirt or Virtlet is the way to go) or whether you want your VMs to coexist with containers without being closely integrated with them (in which case OpenStack makes the most sense). And if you only want integration at the network level, but not at the orchestration level, consider connecting your VMs to an Istio service mesh.

About the author

Christopher Tozzi is a technology analyst with content expertise in cloud computing, application development, open source software, virtualization, containers, and more. He also teaches at a major university in the Albany, New York area. His book, “For Fun and Profit: A History of the Free and Open Source Software Revolution,” was published by MIT Press.
