Use multi-queue functionality to scale network throughput and performance on virtual machines (VMs) with multiple vCPUs.

By default, the queueCount value, which is derived from the domain XML, is determined by the number of vCPUs allocated to a VM. Without multi-queue, network performance does not scale as the number of vCPUs increases. Additionally, because virtio-net has only one Tx and one Rx queue, guests cannot transmit or receive packets in parallel.
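As a sketch of this relationship, a VirtualMachine that allocates four vCPUs gets four queue pairs per virtio interface by default when multi-queue is enabled (the cores value here is illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 4                             # queueCount is derived from the vCPU count
        devices:
          networkInterfaceMultiqueue: true     # each virtio NIC gets one queue pair per vCPU
```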
Enabling virtio-net multiqueue does not offer significant improvements when the number of vNICs in a guest instance is proportional to the number of vCPUs.
MSI vectors are still consumed if virtio-net multiqueue is enabled in the host but not enabled in the guest operating system by the administrator.

Each virtio-net queue consumes 64 KiB of kernel memory for the vhost driver.

Starting a VM with more than 16 CPUs results in no connectivity if networkInterfaceMultiqueue is set to true (CNV-16107).
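Because MSI vectors are wasted unless the guest also enables multiqueue, the administrator can inspect and raise the queue count inside the guest with ethtool. A minimal sketch, assuming the guest interface is named eth0 and the VM has four vCPUs:

```shell
# Show the current and maximum combined channel (queue) counts for the NIC
ethtool -l eth0

# Enable multiqueue in the guest by raising the combined channel count
# (4 assumes a VM with four vCPUs)
ethtool -L eth0 combined 4
```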
Enable multi-queue functionality for interfaces configured with a VirtIO model.
Set the networkInterfaceMultiqueue value to true in the VirtualMachine manifest file of your VM to enable multi-queue functionality:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        devices:
          networkInterfaceMultiqueue: true
Save the VirtualMachine manifest file to apply your changes.
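The saved manifest can then be applied with the cluster CLI and the VM restarted so the device change takes effect. A sketch, assuming the file is named vm.yaml and the VM is named vm-example (oc can be replaced with kubectl):

```shell
# Apply the updated VirtualMachine manifest
oc apply -f vm.yaml

# Restart the VM so the new device settings take effect
virtctl restart vm-example
```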