This project is an experiment and should not be used with production workloads.
Cloud Hypervisor is an open source Virtual Machine Monitor (VMM) that runs on top of KVM. The project focuses exclusively on running modern cloud workloads on a limited set of hardware architectures and platforms. Cloud workloads are those usually run by customers inside a cloud provider; for our purposes this means modern Linux* distributions with most I/O handled by paravirtualised devices (i.e. virtio), no requirement for legacy devices, and recent CPUs and KVM.
cloud-hypervisor only supports the x86-64 CPU architecture for now. We're planning to add support for the AArch64 architecture in the future. Support for modern 64-bit Windows guests is being evaluated.
First, we create a folder in which to build and run cloud-hypervisor:
```shell
$ export CLOUDH=$HOME/cloud-hypervisor
$ mkdir $CLOUDH
```
First you need to clone and build the cloud-hypervisor repo:
```shell
$ pushd $CLOUDH
$ git clone https://github.com/cloud-hypervisor/cloud-hypervisor.git
$ cd cloud-hypervisor
$ cargo build --release

# We need to give the cloud-hypervisor binary the NET_ADMIN capabilities for
# it to set TAP interfaces up on the host.
$ sudo setcap cap_net_admin+ep ./target/release/cloud-hypervisor

$ popd
```
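To double-check that the capability was applied, you can read it back with getcap (a quick, optional verification; the exact output format varies between libcap versions):

```shell
$ getcap ./target/release/cloud-hypervisor
```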
This will build a cloud-hypervisor binary under $CLOUDH/cloud-hypervisor/target/release/cloud-hypervisor.
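As a quick smoke test before going further, you can ask the freshly built binary for its version string (assuming the standard --version flag exposed by the CLI):

```shell
$ $CLOUDH/cloud-hypervisor/target/release/cloud-hypervisor --version
```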
If you want to build and test Cloud Hypervisor without having to install all the required dependencies (the Rust toolchain, cargo tools, etc.), you can also use Cloud Hypervisor's development script: dev_cli.sh. Please note that upon its first invocation, this script will pull a fairly large container image.
For example, to build the Cloud Hypervisor release binary:
```shell
$ pushd $CLOUDH
$ cd cloud-hypervisor
$ ./scripts/dev_cli.sh build --release
```
With dev_cli.sh, one can also run the Cloud Hypervisor CI locally. This can be very convenient for debugging CI errors without having to fully rely on the Cloud Hypervisor CI infrastructure.
For example, to run the Cloud Hypervisor unit tests:
```shell
$ ./scripts/dev_cli.sh tests --unit
```
Use the ./scripts/dev_cli.sh --help command to view all the supported development script commands and their related options.
You can run a guest VM by either using an existing cloud image or booting into your own kernel and disk image.
cloud-hypervisor supports booting disk images containing all needed components to run cloud workloads, a.k.a. cloud images. To do that we rely on the Rust Hypervisor Firmware project to provide an ELF formatted KVM firmware for cloud-hypervisor to directly boot into.
We need to get the latest rust-hypervisor-firmware release and also a working cloud image. Here we will use a Clear Linux image:
```shell
$ pushd $CLOUDH
$ wget https://download.clearlinux.org/releases/31890/clear/clear-31890-kvm.img.xz
$ unxz clear-31890-kvm.img.xz
$ wget https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.2.6/hypervisor-fw
$ popd
```
```shell
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
    --kernel ./hypervisor-fw \
    --disk path=clear-31890-kvm.img \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask=" \
    --rng
$ popd
```
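Leaving the --net values empty lets cloud-hypervisor pick defaults for the TAP interface. Explicit values can be supplied instead; the interface name and addresses below are purely illustrative placeholders, not required values:

```shell
--net "tap=ch-tap0,mac=12:34:56:78:90:ab,ip=192.168.249.1,mask=255.255.255.0"
```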
Multiple arguments can be given to the --disk parameter.
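For example, a second disk can be attached by repeating the path argument (raw-disk.img is a hypothetical extra image you would provide yourself):

```shell
--disk path=clear-31890-kvm.img path=raw-disk.img
```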
cloud-hypervisor also supports direct kernel boot into a vmlinux ELF kernel image. In order to support virtio-fs and virtio-iommu we have our own development branch. You are of course able to use your own kernel, but these instructions will continue with the version that we develop and test against.
To build the kernel:
```shell
# Clone the Cloud Hypervisor Linux branch
$ pushd $CLOUDH
$ git clone --depth 1 https://github.com/cloud-hypervisor/linux.git -b virtio-fs-virtio-iommu-virtio-mem-5.6-rc4 linux-cloud-hypervisor
$ pushd linux-cloud-hypervisor

# Use the cloud-hypervisor kernel config to build your kernel
$ cp $CLOUDH/cloud-hypervisor/resources/linux-config .config
$ make bzImage -j `nproc`
$ popd
```
The vmlinux kernel image will then be located at linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin.
For the disk image, we will use a Clear Linux cloud image that contains a root partition:
```shell
$ pushd $CLOUDH
$ wget https://download.clearlinux.org/releases/31890/clear/clear-31890-kvm.img.xz
$ unxz clear-31890-kvm.img.xz
$ popd
```
Now we can directly boot into our custom kernel and make it use the Clear Linux root partition. If we want to have 4 vCPUs and 1024 MiB of memory:
```shell
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
    --disk path=clear-31890-kvm.img \
    --cmdline "console=hvc0 reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask=" \
    --rng
```
The above example uses the virtio-console device as the guest console, and this device may not be enabled soon enough by the guest kernel to get early kernel debug messages. When early debug messages are needed, using the legacy serial device based console is preferred:
```shell
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
    --console off \
    --serial tty \
    --disk path=clear-31890-kvm.img \
    --cmdline "console=ttyS0 reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask=" \
    --rng
```
cloud-hypervisor is in a very early, pre-alpha stage. Use at your own risk!
Direct kernel boot to userspace should work with most rootfs images.
This document details how to add devices to a running VM. Currently only CPU hot plug is supported.
Follow this documentation.
We are not tracking the cloud-hypervisor TODO list in a specific git-tracked file, but through GitHub issues instead.
In order to satisfy the design goal of having a high-performance, security-focused hypervisor, the decision was made to use the Rust programming language. The language's strong focus on memory and thread safety makes it an ideal candidate for implementing VMMs.
Instead of implementing the VMM components from scratch, cloud-hypervisor imports the rust-vmm crates and shares code and architecture with other VMMs such as Amazon's Firecracker and Google's crosvm.
cloud-hypervisor embraces the rust-vmm project goal, which is to share and re-use as many virtualization crates as possible. As such, the cloud-hypervisor relationship with the rust-vmm project is twofold:

1. It consumes and re-uses as many of the existing rust-vmm virtualization crates as possible.
2. For the features that cloud-hypervisor needs but the rust-vmm project does not yet provide (e.g. ACPI, VFIO, vhost-user, etc), we will be using the cloud-hypervisor VMM to implement and test them, and contribute them back to the rust-vmm project.
A large part of the cloud-hypervisor code is based on either the Firecracker or the crosvm project implementations.
Both of these are VMMs written in Rust with a focus on safety and security, like Cloud Hypervisor.
However we want to emphasize that the Cloud Hypervisor project is neither a fork nor a reimplementation of any of those projects. The goals and use cases we're trying to meet are different. We're aiming at supporting cloud workloads, i.e. those modern, full Linux distribution images currently being run by Cloud Service Provider (CSP) tenants.
Our primary target is not to support client or serverless use cases, and as such our code base already diverges from the crosvm and Firecracker ones. As we add more features to support our use cases, we believe that the divergence will increase while at the same time sharing as much of the fundamental virtualization code through the rust-vmm project crates as possible.
We are working on building a global, diverse and collaborative community around the Cloud Hypervisor project. Anyone who is interested in contributing to the project is welcome to participate.
We believe that contributing to an open source project like Cloud Hypervisor covers a lot more than just sending code. Testing, documentation, pull request reviews, bug reports, feature requests, project improvement suggestions, etc., are all equal and welcome means of contribution. See the CONTRIBUTING document for more details.
Reporting a Potential Security Vulnerability: If you have discovered a potential security vulnerability in this project, please send an e-mail to [email protected]. For issues related to Intel Products, please visit https://security-center.intel.com.
It is important to include the following details:

- The projects and versions affected
- Detailed description of the vulnerability
- Information on known exploits
Vulnerability information is extremely sensitive. Please encrypt all security vulnerability reports using our PGP key.
A member of the Intel Product Security Team will review your e-mail and contact you to collaborate on resolving the issue. For more information on how Intel works to resolve security issues, see: Vulnerability Handling Guidelines
PGP Key: https://www.intel.com/content/www/us/en/security-center/pgp-public-key.html
Vulnerability Handling Guidelines: https://www.intel.com/content/www/us/en/security-center/vulnerability-handling-guidelines.html