OpenShift Virtualization has alerts that inform you when a problem occurs. Critical alerts require immediate attention.
Each alert has a corresponding description of the problem, a reason for why the alert is occurring, a troubleshooting process to diagnose the source of the problem, and steps for resolving the alert.
Network alerts provide information about problems for the OpenShift Virtualization Network Operator.
The KubeMacPool component allocates MAC addresses and prevents MAC address conflicts.
If the KubeMacPool-manager pod is down, then the creation of VirtualMachine objects fails.
Determine the KubeMacPool-manager pod's namespace and name.
$ export KMP_NAMESPACE="$(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print $1}')"
$ export KMP_NAME="$(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print $2}')"
Check the KubeMacPool-manager pod description and logs to determine the source of the problem.
$ oc describe pod -n $KMP_NAMESPACE $KMP_NAME
$ oc logs -n $KMP_NAMESPACE $KMP_NAME
Open a support issue and provide the information gathered in the troubleshooting process.
SSP alerts provide information about problems for the OpenShift Virtualization SSP Operator.
The SSP Operator’s pod is up, but the pod’s reconcile cycle consistently fails. The failures include failing to update the resources for which the Operator is responsible, failing to deploy the template validator, and failing to deploy or update the common templates.
If the SSP Operator fails to reconcile, then the deployment of dependent components fails, reconciliation of component changes fails, or both. Additionally, the updates to the common templates and template validator reset and fail.
Check the ssp-operator pod’s logs for errors:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | awk '{print $1}')"
$ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Verify that the template validator is up. If the template validator is not up, then check the pod’s logs for errors.
$ export NAMESPACE="$($ oc get deployment -A | grep ssp-operator | awk '{print $1}')"
$ oc -n $NAMESPACE get pods -l name=virt-template-validator
$ oc -n $NAMESPACE describe pods -l name=virt-template-validator
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Open a support issue and provide the information gathered in the troubleshooting process.
The SSP Operator deploys and reconciles the common templates and the template validator.
If the SSP Operator is down, then the deployment of dependent components fails, reconciliation of component changes fails, or both. Additionally, the updates to the common templates and template validator reset and fail.
Check ssp-operator’s pod namespace:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | awk '{print $1}')"
Verify that the ssp-operator’s pod is currently down.
$ oc -n $NAMESPACE get pods -l control-plane=ssp-operator
Check the ssp-operator’s pod description and logs.
$ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Open a support issue and provide the information gathered in the troubleshooting process.
The template validator validates that virtual machines (VMs) do not violate their assigned templates.
If every template validator pod is down, then the template validator fails to validate VMs against their assigned templates.
Check the namespaces of the ssp-operator pods and the virt-template-validator pods.
$ export NAMESPACE_SSP="$(oc get deployment -A | grep ssp-operator | awk '{print $1}')"
$ export NAMESPACE="$(oc get deployment -A | grep virt-template-validator | awk '{print $1}')"
Verify that the virt-template-validator’s pod is currently down.
$ oc -n $NAMESPACE get pods -l name=virt-template-validator
Check the pod description and logs of the ssp-operator and the virt-template-validator.
$ oc -n $NAMESPACE_SSP describe pods -l control-plane=ssp-operator
$ oc -n $NAMESPACE_SSP logs --tail=-1 -l control-plane=ssp-operator
$ oc -n $NAMESPACE describe pods -l name=virt-template-validator
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Open a support issue and provide the information gathered in the troubleshooting process.
Virt alerts provide information about problems for the OpenShift Virtualization Virt Operator.
In the past 10 minutes, no virt-operator pod holds the leader lease, despite one or more virt-operator pods being in Ready state. The alert suggests no operating virt-operator pod exists.
The virt-operator is the first Kubernetes Operator active in an OpenShift Container Platform cluster. Its primary responsibilities are:
Installation
Live-update
Live-upgrade of a cluster
Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher
Managing the reconciliation of top-level controllers
In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management.
The virt-operator deployment has two replica pods by default, with one leader pod holding a leader lease, which indicates an operating virt-operator pod.
This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certificate rotation, upgrade, and reconciliation of controllers may be temporarily unavailable.
Determine a virt-operator pod’s leader status from the pod logs. The log messages containing Started leading and acquire leader indicate the leader status of a given virt-operator pod.
Additionally, always check if there are any running virt-operator pods and the pods' statuses with these commands:
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
$ oc -n $NAMESPACE logs <pod-name>
$ oc -n $NAMESPACE describe pod <pod-name>
Leader pod example:
$ oc -n $NAMESPACE logs <pod-name> | grep lead
{"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"}
I1130 12:15:18.635452 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator...
I1130 12:15:19.216582 1 leaderelection.go:253] successfully acquired lease <namespace>/virt-operator
{"component":"virt-operator","level":"info","msg":"Started leading","pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"}
Non-leader pod example:
$ oc -n $NAMESPACE logs <pod-name> | grep lead
{"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"}
I1130 12:15:20.533792 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator...
There are several reasons for no virt-operator pod holding the leader lease, despite one or more virt-operator pods being in Ready state. Identify the root cause and take appropriate action.
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
The virt-controller monitors virtual machine instances (VMIs). It also creates and manages the lifecycle of the pods associated with the VMI objects.
A VMI object is always associated with a pod during its lifetime. However, the pod instance can change over time because of VMI migration.
This alert occurs when no ready virt-controller is detected for five minutes.
If the virt-controller fails, then VM lifecycle management completely fails. Lifecycle management tasks include launching a new VMI or shutting down an existing VMI.
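The commands in this procedure use the NAMESPACE environment variable. If it is not already set, set it first with the same lookup used in the other procedures in this document:
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"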
Check the deployment status of the virt-controller for available replicas and conditions.
$ oc -n $NAMESPACE get deployment virt-controller -o yaml
Check if the virt-controller pods exist and check their statuses.
$ oc get pods -n $NAMESPACE | grep virt-controller
Check the virt-controller pods' events.
$ oc -n $NAMESPACE describe pods <virt-controller pod>
Check the virt-controller pods' logs.
$ oc -n $NAMESPACE logs <virt-controller pod>
Check if there are issues with the nodes, such as if the nodes are in a NotReady state.
$ oc get nodes
There are several reasons for no virt-controller pods being in a Ready state. Identify the root cause and take appropriate action.
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
No virt-operator pod in the Ready state has been detected in the past 10 minutes. The virt-operator deployment has two replica pods by default.
The virt-operator is the first Kubernetes Operator active in an OpenShift Container Platform cluster. Its primary responsibilities are:
Installation
Live-update
Live-upgrade of a cluster
Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher
Managing the reconciliation of top-level controllers
In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management.
The virt-operator is not directly responsible for virtual machines in the cluster. The virt-operator’s unavailability does not affect customer workloads.
This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certificate rotation, upgrade, and reconciliation of controllers are temporarily unavailable.
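The following commands use the NAMESPACE environment variable. If it is not already set, set it with the same lookup used in the other procedures in this document:
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"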
Check the deployment status of the virt-operator for available replicas and conditions.
$ oc -n $NAMESPACE get deployment virt-operator -o yaml
Check the virt-operator pods' events.
$ oc -n $NAMESPACE describe pods <virt-operator pod>
Check the virt-operator pods' logs.
$ oc -n $NAMESPACE logs <virt-operator pod>
Check if there are issues with the control plane nodes, such as if they are in a NotReady state.
$ oc get nodes
There are several reasons for no virt-operator pods being in a Ready state. Identify the root cause and take appropriate action.
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
All OpenShift Container Platform API servers are down.
If all OpenShift Container Platform API servers are down, then no API calls for OpenShift Container Platform entities occur.
Set the NAMESPACE environment variable.
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Verify if there are any running virt-api pods.
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
View the pods' logs using oc logs and the pods' statuses using oc describe.
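For example, using a pod name taken from the output of the previous command (the pod name is a placeholder):
$ oc -n $NAMESPACE logs <virt-api-pod>
$ oc -n $NAMESPACE describe pod <virt-api-pod>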
Check the status of the virt-api deployment. These commands provide the related events and show whether there are any issues with pulling an image, a crashing pod, or other similar problems.
$ oc -n $NAMESPACE get deployment virt-api -o yaml
$ oc -n $NAMESPACE describe deployment virt-api
Check if there are issues with the nodes, such as if the nodes are in a NotReady state.
$ oc get nodes
Virt-api pods can be down for several reasons. Identify the root cause and take appropriate action.
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
More than 80% of the REST calls failed in virt-api in the last five minutes.
A very high rate of failed REST calls to virt-api causes slow response, slow execution of API calls, or even complete dismissal of API calls.
Set the NAMESPACE environment variable.
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check to see how many running virt-api pods exist.
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
View the pods' logs using oc logs and the pods' statuses using oc describe.
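For example, with a pod name from the previous command's output (the pod name is a placeholder):
$ oc -n $NAMESPACE logs <virt-api-pod>
$ oc -n $NAMESPACE describe pod <virt-api-pod>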
Check the status of the virt-api deployment to find out more information. These commands provide the associated events and show if there are any issues with pulling an image or a crashing pod.
$ oc -n $NAMESPACE get deployment virt-api -o yaml
$ oc -n $NAMESPACE describe deployment virt-api
Check if there are issues with the nodes, such as if the nodes are overloaded or in a NotReady state.
$ oc get nodes
There are several reasons for a high rate of failed REST calls. Identify the root cause and take appropriate action.
Node resource exhaustion
Not enough memory on the cluster
Nodes are down
The API server is overloaded, such as when the scheduler is not 100% available
Networking issues
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
No running virt-controller pod has been detected in the past five minutes. The virt-controller deployment has two replica pods by default.
If the virt-controller fails, then VM lifecycle management tasks, such as launching a new VMI or shutting down an existing VMI, completely fail.
Set the NAMESPACE environment variable.
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-controller deployment.
$ oc get deployment -n $NAMESPACE virt-controller -o yaml
Check the virt-controller pods' events.
$ oc -n $NAMESPACE describe pods <virt-controller pod>
Check the virt-controller pods' logs.
$ oc -n $NAMESPACE logs <virt-controller pod>
Check the manager pod’s logs to determine why creating the virt-controller pods fails.
$ oc -n $NAMESPACE logs <virt-controller-pod>
An example of a virt-controller pod name in the logs is virt-controller-7888c64d66-dzc9p. However, there may be several pods that run virt-controller.
There are several known reasons why no running virt-controller pod may be detected. Identify the root cause from the list of possible reasons and take appropriate action.
Node resource exhaustion
Not enough memory on the cluster
Nodes are down
The API server is overloaded, such as when the scheduler is not 100% available
Networking issues
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
More than 80% of the REST calls failed in virt-controller in the last five minutes.
Virt-controller has potentially fully lost connectivity to the API server. This loss does not affect running workloads, but propagation of status updates and actions like migrations cannot occur.
There are two common error types associated with virt-controller REST call failure:
The API server overloads, causing timeouts. Check the API server metrics and details like response times and overall calls; see the example commands after this list.
The virt-controller pod cannot reach the API server. Common causes are:
DNS issues on the node
Networking connectivity issues
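This runbook does not prescribe specific commands for inspecting the API server. As a generic starting point, and assuming cluster-admin access, the following standard OpenShift commands give a quick view of API server health (they are illustrative, not a metrics query):
$ oc get clusteroperator kube-apiserver
$ oc -n openshift-kube-apiserver get pods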
Check the virt-controller logs to determine if the virt-controller pod cannot connect to the API server at all. If so, delete the pod to force a restart.
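A minimal sketch of this check, assuming the NAMESPACE lookup used elsewhere in this document and a placeholder pod name:
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
$ oc -n $NAMESPACE logs <virt-controller-pod>
$ oc -n $NAMESPACE delete pod <virt-controller-pod>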
Additionally, verify if node resource exhaustion or not having enough memory on the cluster is causing the connection failure.
The issue normally relates to DNS or CNI issues outside of the scope of this alert.
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
More than 80% of the REST calls failed in virt-handler in the last five minutes.
Virt-handler lost the connection to the API server. Running workloads on the affected node still run, but status updates cannot propagate and actions such as migrations cannot occur.
There are two common error types associated with virt-handler REST call failure:
The API server overloads, causing timeouts. Check the API server metrics and details like response times and overall calls.
The virt-handler pod cannot reach the API server. Common causes are:
DNS issues on the node
Networking connectivity issues
If the virt-handler cannot connect to the API server, delete the pod to force a restart. The issue normally relates to DNS or CNI issues outside of the scope of this alert. Identify the root cause and take appropriate action.
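A minimal sketch of this step, assuming the standard kubevirt.io=virt-handler pod label and a placeholder pod name:
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-handler
$ oc -n $NAMESPACE logs <virt-handler-pod>
$ oc -n $NAMESPACE delete pod <virt-handler-pod>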
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
This alert occurs when no virt-operator pod is in the Running state in the past 10 minutes. The virt-operator deployment has two replica pods by default.
The virt-operator is the first Kubernetes Operator active in an OpenShift Container Platform cluster. Its primary responsibilities are:
Installation
Live-update
Live-upgrade of a cluster
Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher
Managing the reconciliation of top-level controllers
In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management.
The virt-operator is not directly responsible for virtual machines in the cluster. The virt-operator’s unavailability does not affect customer workloads.
This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certificate rotation, upgrade, and reconciliation of controllers are temporarily unavailable.
Set the NAMESPACE environment variable.
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator deployment.
$ oc get deployment -n $NAMESPACE virt-operator -o yaml
Check the virt-operator pods' events.
$ oc -n $NAMESPACE describe pods <virt-operator pod>
Check the virt-operator pods' logs.
$ oc -n $NAMESPACE logs <virt-operator pod>
Check the manager pod’s logs to determine why creating the virt-operator pods fails.
$ oc -n $NAMESPACE logs <virt-operator-pod>
An example of a virt-operator pod name in the logs is virt-operator-7888c64d66-dzc9p. However, there may be several pods that run virt-operator.
There are several known reasons why no running virt-operator pod may be detected. Identify the root cause from the list of possible reasons and take appropriate action.
Node resource exhaustion
Not enough memory on the cluster
Nodes are down
The API server is overloaded, such as when the scheduler is not 100% available
Networking issues
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.
More than 80% of the REST calls failed in virt-operator in the last five minutes.
Virt-operator lost the connection to the API server. Cluster-level actions such as upgrading and controller reconciliation do not function. There is no effect on customer workloads such as VMs and VMIs.
There are two common error types associated with virt-operator REST call failure:
The API server overloads, causing timeouts. Check the API server metrics and details, such as response times and overall calls.
The virt-operator pod cannot reach the API server. Common causes are network connectivity problems and DNS issues on the node. Check the virt-operator logs to verify that the pod can connect to the API server at all.
$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
$ oc -n $NAMESPACE logs <pod-name>
$ oc -n $NAMESPACE describe pod <pod-name>
If the virt-operator cannot connect to the API server, delete the pod to force a restart. The issue normally relates to DNS or CNI issues outside of the scope of this alert. Identify the root cause and take appropriate action.
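If a restart is needed, the pod can be deleted with a command like the following, where the pod name is a placeholder taken from the earlier output:
$ oc -n $NAMESPACE delete pod <pod-name>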
Otherwise, open a support issue and provide the information gathered in the troubleshooting process.