
OpenShift Virtualization provides metrics for monitoring how infrastructure resources are consumed in the cluster. The metrics cover the following resources:

  • vCPU

  • Network

  • Storage

  • Guest memory swapping

Use the OpenShift Container Platform monitoring dashboard to query virtualization metrics.

Prerequisites

  • To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. See the OpenShift Container Platform machine configuration tasks documentation for more information about applying a kernel argument. A sketch of such a MachineConfig object appears after this list.

  • For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
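
The following is a minimal sketch of a MachineConfig object that applies the schedstats=enable kernel argument to worker nodes. The object name and the worker role label are illustrative assumptions; adjust them for your cluster. Applying a MachineConfig object causes the affected nodes to reboot in a rolling fashion.

Example MachineConfig object that applies the schedstats=enable kernel argument
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-schedstats # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker # targets the worker machine config pool
spec:
  kernelArguments:
    - schedstats=enable # enables the scheduler statistics used by the vCPU wait metric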

About querying metrics

The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.

As a cluster administrator, you can query metrics for all core OpenShift Container Platform and user-defined projects.

As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.

Querying metrics for all projects as a cluster administrator

As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Select the Administrator perspective in the OpenShift Container Platform web console.

  2. Select Observe → Metrics.

  3. Select Insert Metric at Cursor to view a list of predefined queries.

  4. To create a custom query, add your Prometheus Query Language (PromQL) query to the Expression field.

    As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item.

  5. To add multiple queries, select Add Query.

  6. To duplicate an existing query, select the Options menu (kebab) next to the query, then choose Duplicate query.

  7. To delete a query, select the Options menu (kebab) next to the query, then choose Delete query.

  8. To disable a query from being run, select the Options menu (kebab) next to the query, then choose Disable query.

  9. To run queries that you created, select Run Queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.

    Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.

  10. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL.

Querying metrics for user-defined projects as a developer

You can access metrics for a user-defined project as a developer or as a user with view permissions for the project.

In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project.

Developers can use only the Developer perspective, not the Administrator perspective. As a developer, you can query metrics for only one user-defined project at a time on the Observe → Metrics page in the web console.

Prerequisites
  • You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.

  • You have enabled monitoring for user-defined projects.

  • You have deployed a service in a user-defined project.

  • You have created a ServiceMonitor custom resource (CR) for the service to define how the service is monitored. A sketch of this object and of the monitoring configuration appears after this list.
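
The following is a minimal sketch of the two objects behind the monitoring prerequisites: the cluster-monitoring-config ConfigMap entry that enables monitoring for user-defined projects, and a ServiceMonitor custom resource that defines how the service is scraped. The project name, service label, port name, and scrape interval are illustrative assumptions.

Example monitoring configuration and ServiceMonitor object
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true # enables monitoring for user-defined projects
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app-monitor # illustrative name
  namespace: my-project # illustrative user-defined project
spec:
  selector:
    matchLabels:
      app: example-app # must match the labels on the monitored Service
  endpoints:
    - port: web # name of the Service port that exposes metrics
      interval: 30s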

Procedure
  1. Select the Developer perspective in the OpenShift Container Platform web console.

  2. Select Observe → Metrics.

  3. Select the project that you want to view metrics for in the Project: list.

  4. Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL.

  5. Optional: Select Custom query from the Select query list to enter a new query. As you type, autocomplete suggestions appear in a drop-down list. These suggestions include functions and metrics. Click a suggested item to select it.

    In the Developer perspective, you can only run one query at a time.

Virtualization metrics

The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions.

The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.

vCPU metrics

The following query can identify virtual machines that are waiting for Input/Output (I/O):

kubevirt_vmi_vcpu_wait_seconds

Returns the wait time (in seconds) for a virtual machine’s vCPU.

A value above 0 means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.

To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler.

Example vCPU wait time query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 (1)
1 This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.

Network metrics

The following queries can identify virtual machines that are saturating the network:

kubevirt_vmi_network_receive_bytes_total

Returns the total amount of traffic received (in bytes) on the virtual machine’s network.

kubevirt_vmi_network_transmit_bytes_total

Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network.

Example network traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 (1)
1 This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.

Storage metrics

Storage-related traffic

The following queries can identify VMs that are writing large amounts of data:

kubevirt_vmi_storage_read_traffic_bytes_total

Returns the total amount of storage reads (in bytes) of the virtual machine’s storage-related traffic.

kubevirt_vmi_storage_write_traffic_bytes_total

Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic.

Example storage-related traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 (1)
1 This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.

Storage snapshot data

kubevirt_vmsnapshot_disks_restored_from_source_total

Returns the total number of virtual machine disks restored from the source virtual machine.

kubevirt_vmsnapshot_disks_restored_from_source_bytes

Returns the amount of space in bytes restored from the source virtual machine.

Examples of storage snapshot data queries
kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"} (1)
1 This query returns the total number of virtual machine disks restored from the source virtual machine.
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"} (1)
1 This query returns the amount of space in bytes restored from the source virtual machine.

I/O performance

The following queries can determine the I/O performance of storage devices:

kubevirt_vmi_storage_iops_read_total

Returns the number of read I/O operations the virtual machine is performing per second.

kubevirt_vmi_storage_iops_write_total

Returns the number of write I/O operations the virtual machine is performing per second.

Example I/O performance query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 (1)
1 This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.

Guest memory swapping metrics

The following queries can identify which swap-enabled guests are performing the most memory swapping:

kubevirt_vmi_memory_swap_in_traffic_bytes_total

Returns the total amount (in bytes) of memory the virtual guest is swapping in.

kubevirt_vmi_memory_swap_out_traffic_bytes_total

Returns the total amount (in bytes) of memory the virtual guest is swapping out.

Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 (1)
1 This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.

Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
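
If a guest was given too little memory, one mitigation is to raise the guest memory in the VirtualMachine specification and restart the virtual machine. The following excerpt is a minimal sketch, not a complete manifest: the virtual machine name, namespace, and memory size are illustrative assumptions, and other required fields such as disks and networks are omitted.

Example VirtualMachine excerpt with an increased guest memory allocation
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm # illustrative name
  namespace: default # illustrative namespace
spec:
  template:
    spec:
      domain:
        memory:
          guest: 8Gi # increased guest memory allocation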
