You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information.
The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus.
You can add external Alertmanager instances to route alerts for core OpenShift Container Platform projects.
If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.
You have access to the cluster as a user with the cluster-admin cluster role.
You have created the cluster-monitoring-config ConfigMap object.
You have installed the OpenShift CLI (oc).
Edit the cluster-monitoring-config config map in the openshift-monitoring project:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/prometheusK8s:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      additionalAlertmanagerConfigs:
      - <alertmanager_specification> (1)
1 | Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig).
The following sample config map configures an additional Alertmanager for Prometheus by using a bearer token with client TLS authentication:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      additionalAlertmanagerConfigs:
      - scheme: https
        pathPrefix: /
        timeout: "30s"
        apiVersion: v2
        bearerToken:
          name: alertmanager-bearer-token
          key: token
        tlsConfig:
          key:
            name: alertmanager-tls
            key: tls.key
          cert:
            name: alertmanager-tls
            key: tls.crt
          ca:
            name: alertmanager-tls
            key: tls.ca
        staticConfigs:
        - external-alertmanager1-remote.com
        - external-alertmanager1-remote2.com
Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
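Optionally, you can watch the rollout to confirm that the Prometheus pods pick up the new configuration. The following command is a sketch; it assumes the default app.kubernetes.io/name=prometheus label on the prometheus-k8s pods:
$ oc -n openshift-monitoring get pods -l app.kubernetes.io/name=prometheus -w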
A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project of the OpenShift Container Platform monitoring stack.
If you do not need the local Alertmanager, you can disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project.
You have access to the cluster as a user with the cluster-admin cluster role.
You have created the cluster-monitoring-config config map.
You have installed the OpenShift CLI (oc).
Edit the cluster-monitoring-config config map in the openshift-monitoring project:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Add enabled: false for the alertmanagerMain component under data/config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      enabled: false
Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change.
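Optionally, you can confirm that the local Alertmanager pods have been removed. This check is a sketch; it assumes the default app.kubernetes.io/name=alertmanager label on the Alertmanager pods:
$ oc -n openshift-monitoring get pods -l app.kubernetes.io/name=alertmanager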
Alertmanager (Prometheus documentation)
The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver.
For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA).
You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication.
In either case, authentication details are contained in the Secret object rather than in the ConfigMap object.
You can add secrets to the Alertmanager configuration by editing the cluster-monitoring-config config map in the openshift-monitoring project.
After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods.
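For example, a secret that holds the certificate of a private CA for an endpoint receiver could be created as follows. The secret name test-secret and the certificate file path are illustrative placeholders; substitute your own values:
$ oc -n openshift-monitoring create secret generic test-secret --from-file=ca.crt=</path/to/ca.crt>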
You have access to the cluster as a user with the cluster-admin cluster role.
You have created the cluster-monitoring-config config map.
You have created the secret to be configured in Alertmanager in the openshift-monitoring project.
You have installed the OpenShift CLI (oc).
Edit the cluster-monitoring-config config map in the openshift-monitoring project:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Add a secrets: section under data/config.yaml/alertmanagerMain with the following configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      secrets: (1)
      - <secret_name_1> (2)
      - <secret_name_2>
1 | This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
2 | The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.
The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      secrets:
      - test-secret-basic-auth
      - test-secret-api-token
Save the file to apply the changes. The new configuration is applied automatically.
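Optionally, you can confirm that the secrets are mounted into the Alertmanager pods. This check is a sketch; it assumes the default pod name alertmanager-main-0 and that the ls utility is available in the alertmanager container:
$ oc -n openshift-monitoring exec alertmanager-main-0 -c alertmanager -- ls /etc/alertmanager/secrets/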
You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus.
You have access to the cluster as a user with the cluster-admin cluster role.
You have created the cluster-monitoring-config ConfigMap object.
You have installed the OpenShift CLI (oc).
Edit the cluster-monitoring-config config map in the openshift-monitoring project:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Define labels you want to add for every metric under data/config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      externalLabels:
        <key>: <value> (1)
1 | Substitute <key>: <value> with key-value pairs, where <key> is a unique name for the new label and <value> is its value.
For example, to add metadata about the region and environment to all time series and alerts, use the following settings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      externalLabels:
        region: eu
        environment: prod
Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
In OpenShift Container Platform 4.17, you can view firing alerts in the Alerting UI. You can configure Alertmanager to send notifications about default platform alerts by configuring alert receivers.
Alertmanager does not send notifications by default. It is strongly recommended to configure Alertmanager to send notifications by configuring alert receivers through the web console or through the alertmanager-main secret.
PagerDuty (PagerDuty official site)
Prometheus Integration Guide (PagerDuty official site)
You can configure Alertmanager to send notifications. Customize where and how Alertmanager sends notifications about default platform alerts by editing the default configuration in the alertmanager-main secret in the openshift-monitoring namespace.
All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation).
You have access to the cluster as a user with the cluster-admin cluster role.
Open the Alertmanager YAML configuration file:
To open the Alertmanager configuration from the CLI:
Print the currently active Alertmanager configuration from the alertmanager-main secret into the alertmanager.yaml file:
$ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml
Open the alertmanager.yaml file.
To open the Alertmanager configuration from the OpenShift Container Platform web console:
Go to the Administration → Cluster Settings → Configuration → Alertmanager → YAML page of the web console.
Edit the Alertmanager configuration by updating parameters in the YAML:
global:
  resolve_timeout: 5m
route:
  group_wait: 30s (1)
  group_interval: 5m (2)
  repeat_interval: 12h (3)
  receiver: default
  routes:
  - matchers:
    - "alertname=Watchdog"
    repeat_interval: 2m
    receiver: watchdog
  - matchers:
    - "service=<your_service>" (4)
    routes:
    - matchers:
      - <your_matching_rules> (5)
      receiver: <receiver> (6)
receivers:
- name: default
- name: watchdog
- name: <receiver>
  <receiver_configuration> (7)
1 | Specify how long Alertmanager waits while collecting initial alerts for a group of alerts before sending a notification.
2 | Specify how much time must elapse before Alertmanager sends a notification about new alerts added to a group of alerts for which an initial notification was already sent.
3 | Specify the minimum amount of time that must pass before an alert notification is repeated. If you want a notification to repeat at each group interval, set the repeat_interval value to less than the group_interval value. The repeated notification can still be delayed, for example, when certain Alertmanager pods are restarted or rescheduled.
4 | Specify the name of the service that fires the alerts.
5 | Specify labels to match your alerts.
6 | Specify the name of the receiver to use for the alerts.
7 | Specify the receiver configuration.
The following Alertmanager configuration example configures PagerDuty as an alert receiver:
global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - matchers:
    - "alertname=Watchdog"
    repeat_interval: 2m
    receiver: watchdog
  - matchers:
    - "service=example-app"
    routes:
    - matchers:
      - "severity=critical"
      receiver: team-frontend-page
receivers:
- name: default
- name: watchdog
- name: team-frontend-page
  pagerduty_configs:
  - service_key: "<your_key>"
With this configuration, alerts of critical severity that are fired by the example-app service are sent through the team-frontend-page receiver. Typically, these types of alerts would be paged to an individual or a critical response team.
Apply the new configuration in the file:
To apply the changes from the CLI, run the following command:
$ oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-
To apply the changes from the OpenShift Container Platform web console, click Save.
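Optionally, you can verify that the new configuration is active by printing the contents of the alertmanager-main secret again with the same command that you used to extract it:
$ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode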
You can configure alert routing through the OpenShift Container Platform web console to ensure that you learn about important issues with your cluster.
The OpenShift Container Platform web console provides fewer settings to configure alert routing than the alertmanager-main secret.
You have access to the cluster as a user with the cluster-admin cluster role.
In the Administrator perspective, go to Administration → Cluster Settings → Configuration → Alertmanager.
Alternatively, you can go to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert.
Click Create Receiver in the Receivers section of the page.
In the Create Receiver form, add a Receiver name and choose a Receiver type from the list.
Edit the receiver configuration:
For PagerDuty receivers:
Choose an integration type and add a PagerDuty integration key.
Add the URL of your PagerDuty installation.
Click Show advanced configuration if you want to edit the client and incident details or the severity specification.
For webhook receivers:
Add the endpoint to send HTTP POST requests to.
Click Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver.
For email receivers:
Add the email address to send notifications to.
Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details.
Select whether TLS is required.
Click Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration.
For Slack receivers:
Add the URL of the Slack webhook.
Add the Slack channel or user name to send notifications to.
Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames.
By default, firing alerts with labels that match all of the selectors are sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver, perform the following steps:
Add routing label names and values in the Routing labels section of the form.
Click Add label to add further routing labels.
Click Create to create the receiver.
You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results:
All default platform alerts are sent to a receiver owned by the team in charge of these alerts.
All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts.
You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts:
Use the openshift_io_alert_source="platform" matcher to match default platform alerts.
Use the openshift_io_alert_source!="platform" or openshift_io_alert_source="" matcher to match user-defined alerts.
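For example, a route section that separates the two alert sources might look like the following sketch. The receiver names platform-receiver and user-defined-receiver are illustrative placeholders for receivers that you have defined:
route:
  routes:
  - matchers:
    - 'openshift_io_alert_source="platform"'
    receiver: platform-receiver
  - matchers:
    - 'openshift_io_alert_source!="platform"'
    receiver: user-defined-receiver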
This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.