
Accessing the Alerting UI from the Administrator perspective

The Alerting UI is accessible through the Administrator perspective of the OpenShift Container Platform web console.

  • From the Administrator perspective, go to Observe → Alerting. The three main pages in the Alerting UI in this perspective are the Alerts, Silences, and Alerting rules pages.

Getting information about alerts, silences, and alerting rules from the Administrator perspective

The Alerting UI provides detailed information about alerts and their governing alerting rules and silences.

Prerequisites
  • You have access to the cluster as a user with view permissions for the project that you are viewing alerts for.

Procedure

To obtain information about alerts:

  1. From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe → Alerting → Alerts page.

  2. Optional: Search for alerts by name by using the Name field in the search list.

  3. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list.

  4. Optional: Sort the alerts by clicking one or more of the Name, Severity, State, and Source column headers.

  5. Click the name of an alert to view its Alert details page. The page includes a graph that illustrates alert time series data. It also provides the following information about the alert:

    • A description of the alert

    • Messages associated with the alert

    • Labels attached to the alert

    • A link to its governing alerting rule

    • Silences for the alert, if any exist

To obtain information about silences:

  1. From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe → Alerting → Silences page.

  2. Optional: Filter the silences by name using the Search by name field.

  3. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied.

  4. Optional: Sort the silences by clicking one or more of the Name, Firing alerts, State, and Creator column headers.

  5. Select the name of a silence to view its Silence details page. The page includes the following details:

    • Alert specification

    • Start time

    • End time

    • Silence state

    • Number and list of firing alerts

To obtain information about alerting rules:

  1. From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe → Alerting → Alerting rules page.

  2. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list.

  3. Optional: Sort the alerting rules by clicking one or more of the Name, Severity, Alert state, and Source column headers.

  4. Select the name of an alerting rule to view its Alerting rule details page. The page provides the following details about the alerting rule:

    • Alerting rule name, severity, and description.

    • The expression that defines the condition for firing the alert.

    • The time for which the condition should be true for an alert to fire.

    • A graph for each alert governed by the alerting rule, showing the value with which the alert is firing.

    • A table of all alerts governed by the alerting rule.


Managing silences

You can create a silence for an alert in the OpenShift Container Platform web console in the Administrator perspective. After you create silences, you can view, edit, and expire them. You do not receive notifications about a silenced alert when the alert fires.

When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time.
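One way to mitigate this risk is to give Alertmanager a persistent volume claim in the cluster-monitoring-config config map. The following is a minimal sketch, not a complete configuration; the storage size is a placeholder, and omitting storageClassName uses your cluster's default storage class:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 1Gi # placeholder size; adjust for your cluster
```

With persistent storage configured, silences survive simultaneous restarts of all Alertmanager pods.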

Silencing alerts from the Administrator perspective

You can silence a specific alert or silence alerts that match a specification that you define.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

Procedure

To silence a specific alert:

  1. From the Administrator perspective of the OpenShift Container Platform web console, go to Observe → Alerting → Alerts.

  2. For the alert that you want to silence, click the Options menu (⋮) and select Silence alert to open the Silence alert page with a default configuration for the chosen alert.

  3. Optional: Change the default configuration details for the silence.

    You must add a comment before saving a silence.

  4. To save the silence, click Silence.

To silence a set of alerts:

  1. From the Administrator perspective of the OpenShift Container Platform web console, go to Observe → Alerting → Silences.

  2. Click Create silence.

  3. On the Create silence page, set the schedule, duration, and label details for an alert.

    You must add a comment before saving a silence.

  4. To create silences for alerts that match the labels that you entered, click Silence.

Editing silences from the Administrator perspective

You can edit a silence, which expires the existing silence and creates a new one with the changed configuration.

Prerequisites
  • If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role.

  • If you are a non-administrator user, you have access to the cluster as a user with the following user roles:

    • The cluster-monitoring-view cluster role, which allows you to access Alertmanager.

    • The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console.

Procedure
  1. From the Administrator perspective of the OpenShift Container Platform web console, go to Observe → Alerting → Silences.

  2. For the silence you want to modify, click the Options menu (⋮) and select Edit silence.

    Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence.

  3. On the Edit silence page, make changes and click Silence. Doing so expires the existing silence and creates one with the updated configuration.

Expiring silences from the Administrator perspective

You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently.

You cannot delete expired silences. Expired silences older than 120 hours are garbage collected.

Prerequisites
  • If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role.

  • If you are a non-administrator user, you have access to the cluster as a user with the following user roles:

    • The cluster-monitoring-view cluster role, which allows you to access Alertmanager.

    • The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console.

Procedure
  1. Go to Observe → Alerting → Silences.

  2. For the silence or silences you want to expire, select the checkbox in the corresponding row.

  3. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected.

    Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence.

Managing alerting rules for core platform monitoring

OpenShift Container Platform monitoring includes a large set of default alerting rules for platform metrics. As a cluster administrator, you can customize this set of rules in two ways:

  • Modify the settings for existing platform alerting rules by adjusting thresholds or by adding and modifying labels. For example, you can change the severity label for an alert from warning to critical to help you route and triage issues flagged by an alert.

  • Define and add new custom alerting rules by constructing a query expression based on core platform metrics in the openshift-monitoring project.

Creating new alerting rules

As a cluster administrator, you can create new alerting rules based on platform metrics. These alerting rules trigger alerts based on the values of chosen metrics.

  • If you create a customized AlertingRule resource based on an existing platform alerting rule, silence the original alert to avoid receiving conflicting alerts.

  • To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value.

Prerequisites
  • You have access to the cluster as a user that has the cluster-admin cluster role.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Create a new YAML configuration file named example-alerting-rule.yaml.

  2. Add an AlertingRule resource to the YAML file. The following example creates a new alerting rule named example, similar to the default Watchdog alert:

    apiVersion: monitoring.openshift.io/v1
    kind: AlertingRule
    metadata:
      name: example
      namespace: openshift-monitoring (1)
    spec:
      groups:
      - name: example-rules
        rules:
        - alert: ExampleAlert (2)
          for: 1m (3)
          expr: vector(1) (4)
          labels:
            severity: warning (5)
          annotations:
            message: This is an example alert. (6)
    1 Ensure that the namespace is openshift-monitoring.
    2 The name of the alerting rule you want to create.
    3 The duration for which the condition should be true before an alert is fired.
    4 The PromQL query expression that defines the new rule.
    5 The severity that the alerting rule assigns to the alert.
    6 The message associated with the alert.

    You must create the AlertingRule object in the openshift-monitoring namespace. Otherwise, the alerting rule is not accepted.

  3. Apply the configuration file to the cluster:

    $ oc apply -f example-alerting-rule.yaml
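To confirm that the AlertingRule object was accepted, you can list it in the openshift-monitoring namespace. This is a sketch that assumes an active cluster connection:

```
$ oc -n openshift-monitoring get alertingrule example
```

If the rule fires as expected, the resulting alert appears on the Observe → Alerting → Alerts page after the duration set in the for field.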

Modifying core platform alerting rules

As a cluster administrator, you can modify core platform alerts before Alertmanager routes them to a receiver. For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Create a new YAML configuration file named example-modified-alerting-rule.yaml.

  2. Add an AlertRelabelConfig resource to the YAML file. The following example modifies the severity setting to critical for the default platform Watchdog alerting rule:

    apiVersion: monitoring.openshift.io/v1
    kind: AlertRelabelConfig
    metadata:
      name: watchdog
      namespace: openshift-monitoring (1)
    spec:
      configs:
      - sourceLabels: [alertname,severity] (2)
        regex: "Watchdog;none" (3)
        targetLabel: severity (4)
        replacement: critical (5)
        action: Replace (6)
    1 Ensure that the namespace is openshift-monitoring.
    2 The source labels for the values you want to modify.
    3 The regular expression against which the value of sourceLabels is matched.
    4 The target label of the value you want to modify.
    5 The new value to replace the target label.
    6 The relabel action that replaces the old value based on regex matching. The default action is Replace. Other possible values are Keep, Drop, HashMod, LabelMap, LabelDrop, and LabelKeep.

    You must create the AlertRelabelConfig object in the openshift-monitoring namespace. Otherwise, the alert label will not change.

  3. Apply the configuration file to the cluster:

    $ oc apply -f example-modified-alerting-rule.yaml
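The Replace action shown above rewrites a label value. To exclude an alert from being sent to Alertmanager entirely, you can use the Drop action instead. The following is a minimal sketch; the resource name is illustrative:

```yaml
apiVersion: monitoring.openshift.io/v1
kind: AlertRelabelConfig
metadata:
  name: watchdog-drop # illustrative name
  namespace: openshift-monitoring
spec:
  configs:
  - sourceLabels: [alertname]
    regex: "Watchdog"
    action: Drop # drops alerts whose alertname matches the regex
```

As with the Replace example, this object must be created in the openshift-monitoring namespace to take effect.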

Managing alerting rules for user-defined projects

In OpenShift Container Platform, you can create, view, edit, and remove alerting rules for user-defined projects. Those alerting rules trigger alerts based on the values of the chosen metrics.

Creating alerting rules for user-defined projects

You can create alerting rules for user-defined projects. Those alerting rules trigger alerts based on the values of the chosen metrics.

To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value.

Prerequisites
  • You have enabled monitoring for user-defined projects.

  • You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml.

  2. Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named example-alert. The alerting rule fires an alert when the version metric exposed by the sample service becomes 0:

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: example-alert
      namespace: ns1
    spec:
      groups:
      - name: example
        rules:
        - alert: VersionAlert (1)
          for: 1m (2)
          expr: version{job="prometheus-example-app"} == 0 (3)
          labels:
            severity: warning (4)
          annotations:
            message: This is an example alert. (5)
    1 The name of the alerting rule you want to create.
    2 The duration for which the condition should be true before an alert is fired.
    3 The PromQL query expression that defines the new rule.
    4 The severity that the alerting rule assigns to the alert.
    5 The message associated with the alert.
  3. Apply the configuration file to the cluster:

    $ oc apply -f example-app-alerting-rule.yaml

Creating cross-project alerting rules for user-defined projects

You can create alerting rules for user-defined projects that are not bound to their project of origin by configuring a project in the user-workload-monitoring-config config map. This allows you to create generic alerting rules that apply to multiple user-defined projects instead of having individual PrometheusRule objects in each user project.

Prerequisites
  • If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin cluster role.

  • If you are a non-administrator user, you have access to the cluster as a user with the following user roles:

    • The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project to edit the user-workload-monitoring-config config map.

    • The monitoring-rules-edit cluster role for the project where you want to create an alerting rule.

  • A cluster administrator has enabled monitoring for user-defined projects.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Configure projects in which you want to create alerting rules that are not bound to a specific project:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        namespacesWithoutLabelEnforcement: [ <namespace> ] (1)
        # ...
    1 Specify one or more projects in which you want to create cross-project alerting rules. Prometheus and Thanos Ruler for user-defined monitoring do not enforce the namespace label in PrometheusRule objects created in these projects.
  3. Create a YAML file for alerting rules. In this example, it is called example-cross-project-alerting-rule.yaml.

  4. Add an alerting rule configuration to the YAML file. The following example creates a new cross-project alerting rule called example-security. The alerting rule fires when a user project does not enforce the restricted pod security policy:

    Example cross-project alerting rule
    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: example-security
      namespace: ns1 (1)
    spec:
      groups:
        - name: pod-security-policy
          rules:
            - alert: "ProjectNotEnforcingRestrictedPolicy" (2)
              for: 5m (3)
              expr: kube_namespace_labels{namespace!~"(openshift|kube).*|default",label_pod_security_kubernetes_io_enforce!="restricted"} (4)
              annotations:
                message: "Restricted policy not enforced. Project {{ $labels.namespace }} does not enforce the restricted pod security policy." (5)
              labels:
                severity: warning (6)
    1 Ensure that you specify the project that you defined in the namespacesWithoutLabelEnforcement field.
    2 The name of the alerting rule you want to create.
    3 The duration for which the condition should be true before an alert is fired.
    4 The PromQL query expression that defines the new rule.
    5 The message associated with the alert.
    6 The severity that the alerting rule assigns to the alert.

    Ensure that you create a specific cross-project alerting rule in only one of the projects that you specified in the namespacesWithoutLabelEnforcement field. If you create the same cross-project alerting rule in multiple projects, it results in repeated alerts.

  5. Apply the configuration file to the cluster:

    $ oc apply -f example-cross-project-alerting-rule.yaml

Listing alerting rules for all projects in a single view

As a cluster administrator, you can list alerting rules for core OpenShift Container Platform and user-defined projects together in a single view.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. From the Administrator perspective of the OpenShift Container Platform web console, go to Observe → Alerting → Alerting rules.

  2. Select the Platform and User sources in the Filter drop-down menu.

    The Platform source is selected by default.

Removing alerting rules for user-defined projects

You can remove alerting rules for user-defined projects.

Prerequisites
  • You have enabled monitoring for user-defined projects.

  • You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project that contains the alerting rule that you want to remove.

  • You have installed the OpenShift CLI (oc).

Procedure
  • To remove rule <foo> in <namespace>, run the following command:

    $ oc -n <namespace> delete prometheusrule <foo>

Disabling cross-project alerting rules for user-defined projects

Creating cross-project alerting rules for user-defined projects is enabled by default. Cluster administrators can disable the capability in the cluster-monitoring-config config map for the following reasons:

  • To prevent user-defined monitoring from overloading the cluster monitoring stack.

  • To prevent buggy alerting rules from being applied to the cluster without having to identify the rule that causes the issue.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. In the cluster-monitoring-config config map, disable the option to create cross-project alerting rules by setting the rulesWithoutLabelEnforcementAllowed value under data/config.yaml/userWorkload to false:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        userWorkload:
          rulesWithoutLabelEnforcementAllowed: false
        # ...
  3. Save the file to apply the changes.
