Logging alerts are installed as part of the Red Hat OpenShift Logging Operator installation. Alerts depend on metrics exported by the log collection and log storage backends. These metrics are enabled if you selected the option to Enable operator recommended cluster monitoring on this namespace when installing the Red Hat OpenShift Logging Operator. For more information about installing logging Operators, see Installing logging using the web console.
Default logging alerts are sent to the OpenShift Container Platform monitoring stack Alertmanager in the openshift-monitoring namespace, unless you have disabled the local Alertmanager instance.
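For reference, the local Alertmanager instance is enabled or disabled through the cluster-monitoring-config ConfigMap in the openshift-monitoring namespace. The following is a minimal sketch of the disabled case; check your cluster's existing ConfigMap before applying anything like it:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Setting enabled to false removes the local Alertmanager instance,
    # so default logging alerts are no longer routed to it.
    alertmanagerMain:
      enabled: false
```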
The Alerting UI is accessible through the Administrator perspective and the Developer perspective of the OpenShift Container Platform web console.
In the Administrator perspective, go to Observe → Alerting. The three main pages in the Alerting UI in this perspective are the Alerts, Silences, and Alerting rules pages.
In the Developer perspective, go to Observe → <project_name> → Alerts. In this perspective, alerts, silences, and alerting rules are all managed from the Alerts page. The results shown in the Alerts page are specific to the selected project.
In the Developer perspective, you can select from core OpenShift Container Platform and user-defined projects that you have access to in the Project: <project_name> list. However, alerts, silences, and alerting rules relating to core OpenShift Container Platform projects are not displayed if you do not have cluster-admin privileges.
In logging 5.8 and later versions, the following alerts are generated by the Red Hat OpenShift Logging Operator. You can view these alerts in the OpenShift Container Platform web console.
Alert Name | Message | Description | Severity |
---|---|---|---|
CollectorNodeDown | Prometheus could not scrape the <namespace>/<pod> collector component for more than 10m. | Collector cannot be scraped. | Critical |
CollectorHighErrorRate | <value>% of records have resulted in an error by the <namespace>/<pod> collector component. | Collector component errors are high. | Critical |
CollectorVeryHighErrorRate | <value>% of records have resulted in an error by the <namespace>/<pod> collector component. | Collector component errors are very high. | Critical |
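These collector alerts are ordinary Prometheus alerting rules, installed as a PrometheusRule object in the openshift-logging namespace. The sketch below shows the general shape of such a rule; the object name, the metric name (vector_component_errors_total), and the expression are illustrative assumptions, not the Operator's exact rule:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: collector-alerts-example   # hypothetical name, for illustration only
  namespace: openshift-logging
spec:
  groups:
    - name: logging_collector.alerts
      rules:
        - alert: CollectorHighErrorRate
          # Hypothetical expression: fires when the error count for a
          # collector pod exceeds 10 over the previous 15 minutes.
          expr: sum by (namespace, pod) (increase(vector_component_errors_total[15m])) > 10
          for: 15m
          labels:
            severity: critical
          annotations:
            description: "{{ $labels.namespace }}/{{ $labels.pod }} collector component errors are high."
```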
In logging 5.7 and later versions, the following alerts are generated by the Vector collector. You can view these alerts in the OpenShift Container Platform web console.
Alert | Message | Description | Severity |
---|---|---|---|
CollectorHighErrorRate | <value> of records have resulted in an error by vector <instance>. | The number of Vector output errors is high, by default more than 10 in the previous 15 minutes. | Warning |
CollectorNodeDown | Prometheus could not scrape vector <instance> for more than 10m. | Vector is reporting that Prometheus could not scrape a specific Vector instance. | Critical |
CollectorVeryHighErrorRate | <value> of records have resulted in an error by vector <instance>. | The number of Vector component errors is very high, by default more than 25 in the previous 15 minutes. | Critical |
FluentdQueueLengthIncreasing | In the last hour, the fluentd <instance> buffer queue length increased continuously by more than 1. | Fluentd is reporting that the queue size is increasing. | Warning |
The following alerts are generated by the legacy Fluentd log collector. You can view these alerts in the OpenShift Container Platform web console.
Alert | Message | Description | Severity |
---|---|---|---|
FluentDHighErrorRate | <value> of records have resulted in an error by fluentd <instance>. | The number of FluentD output errors is high, by default more than 10 in the previous 15 minutes. | Warning |
FluentdNodeDown | Prometheus could not scrape fluentd <instance> for more than 10m. | Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. | Critical |
FluentdQueueLengthIncreasing | In the last hour, the fluentd <instance> buffer queue length increased continuously by more than 1. | Fluentd is reporting that the queue size is increasing. | Warning |
FluentDVeryHighErrorRate | <value> of records have resulted in an error by fluentd <instance>. | The number of FluentD output errors is very high, by default more than 25 in the previous 15 minutes. | Critical |
The following alerting rules are generated for the Elasticsearch log store. You can view these alerting rules in the OpenShift Container Platform web console.
Alert | Description | Severity |
---|---|---|
ElasticsearchClusterNotHealthy | The cluster health status has been RED for at least 2 minutes. The cluster does not accept writes, shards may be missing, or the master node has not been elected yet. | Critical |
ElasticsearchClusterNotHealthy | The cluster health status has been YELLOW for at least 20 minutes. Some shard replicas are not allocated. | Warning |
ElasticsearchDiskSpaceRunningLow | The cluster is expected to be out of disk space within the next 6 hours. | Critical |
ElasticsearchHighFileDescriptorUsage | The cluster is predicted to be out of file descriptors within the next hour. | Warning |
ElasticsearchJVMHeapUseHigh | The JVM Heap usage on the specified node is high. | Alert |
ElasticsearchNodeDiskWatermarkReached | The specified node has hit the low watermark due to low free disk space. Shards cannot be allocated to this node anymore. Consider adding more disk space to the node. | Info |
ElasticsearchNodeDiskWatermarkReached | The specified node has hit the high watermark due to low free disk space. Some shards will be reallocated to different nodes if possible. Make sure more disk space is added to the node or drop old indices allocated to this node. | Warning |
ElasticsearchNodeDiskWatermarkReached | The specified node has hit the flood watermark due to low free disk space. Every index that has a shard allocated on this node has a read-only block enforced on it. The index block must be manually released when the disk use falls below the high watermark. | Critical |
ElasticsearchJVMHeapUseHigh | The JVM Heap usage on the specified node is too high. | Alert |
ElasticsearchWriteRequestsRejectionJumps | Elasticsearch is experiencing an increase in write rejections on the specified node. This node might not be keeping up with the indexing speed. | Warning |
AggregatedLoggingSystemCPUHigh | The CPU used by the system on the specified node is too high. | Alert |
ElasticsearchProcessCPUHigh | The CPU used by Elasticsearch on the specified node is too high. | Alert |
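Besides the web console, firing alerts can be retrieved from the standard Alertmanager HTTP API (/api/v2/alerts) and filtered by severity. The sketch below runs against a fabricated response sample; the alert data shown is illustrative only, and in a real cluster the JSON would come from the Alertmanager route:

```python
import json

# Fabricated sample shaped like an Alertmanager /api/v2/alerts response;
# in a real cluster this JSON would come from the Alertmanager route.
response = '''
[
  {"labels": {"alertname": "ElasticsearchClusterNotHealthy", "severity": "critical"}},
  {"labels": {"alertname": "FluentdQueueLengthIncreasing", "severity": "warning"}},
  {"labels": {"alertname": "ElasticsearchNodeDiskWatermarkReached", "severity": "info"}}
]
'''

def alerts_by_severity(raw_json, severity):
    """Return alert names whose severity label matches the given level."""
    return [a["labels"]["alertname"]
            for a in json.loads(raw_json)
            if a["labels"].get("severity") == severity]

print(alerts_by_severity(response, "critical"))  # ['ElasticsearchClusterNotHealthy']
```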