
The OpenShift Container Platform 4 installation program provides only a small number of configuration options before installation. Configuring most OpenShift Container Platform framework components, including the cluster monitoring stack, happens post-installation.

This section explains what configuration is supported, shows how to configure the monitoring stack, and demonstrates several common configuration scenarios.

Prerequisites

  • The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources.

Maintenance and support for monitoring

The supported way of configuring OpenShift Container Platform monitoring is to use the options described in this document. Do not use other configurations, because they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can be handled gracefully only if all configuration possibilities are controlled. If you use configurations other than those described in this section, your changes will disappear, because the cluster-monitoring-operator reconciles any differences. The Operator resets everything to the defined state by default and by design.

Support considerations for monitoring

The following modifications are explicitly not supported:

  • Creating additional ServiceMonitor, PodMonitor, and PrometheusRule objects in the openshift-* and kube-* projects.

  • Modifying any resources or objects deployed in the openshift-monitoring or openshift-user-workload-monitoring projects. The resources created by the OpenShift Container Platform monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility.

    The Alertmanager configuration is deployed as a secret resource in the openshift-monitoring project. To configure additional routes for Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement; see the example command after this list.

  • Modifying resources of the stack. The OpenShift Container Platform monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them.

  • Deploying user-defined workloads to openshift-*, and kube-* projects. These projects are reserved for Red Hat provided components and they should not be used for user-defined workloads.

  • Modifying the monitoring stack Grafana instance.

  • Installing custom Prometheus instances on OpenShift Container Platform. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator.

  • Enabling symptom-based monitoring by using the Probe custom resource definition (CRD) in Prometheus Operator.

  • Modifying Alertmanager configurations by using the AlertmanagerConfig CRD in Prometheus Operator.
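For example, the supported exception for modifying the Alertmanager configuration starts by extracting and decoding the current configuration from its secret. This is a minimal sketch that assumes the default alertmanager-main secret and its alertmanager.yaml key:

  $ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml

After you modify the decoded file, you encode it back into the secret to apply the change.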

Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed.

Support policy for monitoring Operators

Monitoring Operators ensure that OpenShift Container Platform monitoring resources function as designed and tested. If Cluster Version Operator (CVO) control of an Operator is overridden, the Operator does not respond to configuration changes, reconcile the intended state of cluster objects, or receive updates.

While overriding CVO control for an Operator can be helpful during debugging, this is unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.

Overriding the Cluster Version Operator

Administrators can add the spec.overrides parameter to the CVO configuration to provide a list of overrides to the behavior of the CVO for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.

Setting a CVO override puts the entire cluster in an unsupported state and prevents the monitoring stack from being reconciled to its intended state. This impacts the reliability features built into Operators and prevents updates from being received. Reported issues must be reproduced after removing any overrides for support to proceed.
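For reference, an override entry in the ClusterVersion resource takes the following form. The component shown here, the cluster-monitoring-operator Deployment, is only an illustration of the syntax, not a recommendation:

  apiVersion: config.openshift.io/v1
  kind: ClusterVersion
  metadata:
    name: version
  spec:
    overrides:
    - kind: Deployment
      group: apps
      name: cluster-monitoring-operator
      namespace: openshift-monitoring
      unmanaged: true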

Preparing to configure the monitoring stack

You can configure the monitoring stack by creating and updating monitoring config maps.

Creating a cluster monitoring config map

To configure core OpenShift Container Platform monitoring components, you must create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.

When you save your changes to the cluster-monitoring-config ConfigMap object, some or all of the pods in the openshift-monitoring project might be redeployed. It can sometimes take a while for these components to redeploy.
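If you want to observe the rollout, you can optionally watch the pods in the project while the changes take effect. This is a convenience only and is not part of the procedure:

  $ oc -n openshift-monitoring get pods -w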

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Check whether the cluster-monitoring-config ConfigMap object exists:

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config
  2. If the ConfigMap object does not exist:

    1. Create the following YAML manifest. In this example the file is called cluster-monitoring-config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
    2. Apply the configuration to create the ConfigMap object:

      $ oc apply -f cluster-monitoring-config.yaml

Creating a user-defined workload monitoring config map

To configure the components that monitor user-defined projects, you must create the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project.

When you save your changes to the user-workload-monitoring-config ConfigMap object, some or all of the pods in the openshift-user-workload-monitoring project might be redeployed. It can sometimes take a while for these components to redeploy. You can create and configure the config map before you first enable monitoring for user-defined projects, to prevent having to redeploy the pods often.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Check whether the user-workload-monitoring-config ConfigMap object exists:

    $ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config
  2. If the user-workload-monitoring-config ConfigMap object does not exist:

    1. Create the following YAML manifest. In this example the file is called user-workload-monitoring-config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
    2. Apply the configuration to create the ConfigMap object:

      $ oc apply -f user-workload-monitoring-config.yaml

      Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.
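      A cluster administrator enables monitoring for user-defined projects by setting enableUserWorkload: true under data/config.yaml in the cluster-monitoring-config ConfigMap object, for example:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          enableUserWorkload: true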

Configuring the monitoring stack

In OpenShift Container Platform 4.10, you can configure the monitoring stack using the cluster-monitoring-config or user-workload-monitoring-config ConfigMap objects. Config maps configure the Cluster Monitoring Operator (CMO), which in turn configures the components of the stack.

Prerequisites
  • If you are configuring core OpenShift Container Platform monitoring components:

    • You have access to the cluster as a user with the cluster-admin role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the ConfigMap object.

    • To configure core OpenShift Container Platform monitoring components:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add your configuration under data/config.yaml as a key-value pair <component_name>: <component_configuration>:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>:
              <configuration_for_the_component>

        Substitute <component> and <configuration_for_the_component> accordingly.

        The following example ConfigMap object configures a persistent volume claim (PVC) for Prometheus. This relates to the Prometheus instance that monitors core OpenShift Container Platform components only:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s: (1)
              volumeClaimTemplate:
                spec:
                  storageClassName: fast
                  volumeMode: Filesystem
                  resources:
                    requests:
                      storage: 40Gi
        1 Defines the Prometheus component and the subsequent lines define its configuration.
    • To configure components that monitor user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add your configuration under data/config.yaml as a key-value pair <component_name>: <component_configuration>:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>:
              <configuration_for_the_component>

        Substitute <component> and <configuration_for_the_component> accordingly.

        The following example ConfigMap object configures a data retention period and minimum container resource requests for Prometheus. This relates to the Prometheus instance that monitors user-defined projects only:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus: (1)
              retention: 24h (2)
              resources:
                requests:
                  cpu: 200m (3)
                  memory: 2Gi (4)
        1 Defines the Prometheus component and the subsequent lines define its configuration.
        2 Configures a twenty-four hour data retention period for the Prometheus instance that monitors user-defined projects.
        3 Defines a minimum resource request of 200 millicores for the Prometheus container.
        4 Defines a minimum memory resource request of 2 GiB for the Prometheus container.

        The Prometheus config map component is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object.

  2. Save the file to apply the changes to the ConfigMap object. The pods affected by the new configuration are restarted automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
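    As an optional check after a change rolls out, you can confirm the resulting objects. For example, after applying the PVC example above for prometheusK8s, the following command lists the persistent volume claims in the openshift-monitoring project:

    $ oc -n openshift-monitoring get pvc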


Configurable monitoring components

This table shows the monitoring components you can configure and the keys used to specify the components in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects:

Table 1. Configurable monitoring components

Component                  cluster-monitoring-config key    user-workload-monitoring-config key
-------------------------  -------------------------------  -----------------------------------
Prometheus Operator        prometheusOperator               prometheusOperator
Prometheus                 prometheusK8s                    prometheus
Alertmanager               alertmanagerMain                 -
kube-state-metrics         kubeStateMetrics                 -
openshift-state-metrics    openshiftStateMetrics            -
Grafana                    grafana                          -
Telemeter Client           telemeterClient                  -
Prometheus Adapter         k8sPrometheusAdapter             -
Thanos Querier             thanosQuerier                    -
Thanos Ruler               -                                thanosRuler

The Prometheus key is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object.
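For example, the same retention setting goes under different keys depending on which config map you edit. The 24h value is illustrative only.

In the cluster-monitoring-config ConfigMap object:

  config.yaml: |
    prometheusK8s:
      retention: 24h

In the user-workload-monitoring-config ConfigMap object:

  config.yaml: |
    prometheus:
      retention: 24h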

Moving monitoring components to different nodes

You can move any of the monitoring stack components to specific nodes.

Prerequisites
  • If you are configuring core OpenShift Container Platform monitoring components:

    • You have access to the cluster as a user with the cluster-admin role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the ConfigMap object:

    • To move a component that monitors core OpenShift Container Platform projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Specify the nodeSelector constraint for the component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>:
              nodeSelector:
                <node_key>: <node_value>
                <node_key>: <node_value>
                <...>

        Substitute <component> accordingly and substitute <node_key>: <node_value> with the map of key-value pairs that specifies a group of destination nodes. Often, only a single key-value pair is used.

        The component can only run on nodes that have each of the specified key-value pairs as labels. The nodes can have additional labels as well.

        Many of the monitoring components are deployed by using multiple pods across different nodes in the cluster to maintain high availability. When moving monitoring components to labeled nodes, ensure that enough matching nodes are available to maintain resilience for the component. If only one label is specified, ensure that enough nodes contain that label to distribute all of the pods for the component across separate nodes. Alternatively, you can specify multiple labels each relating to individual nodes.

        If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations; see the troubleshooting commands after this procedure.

        For example, to move monitoring components for core OpenShift Container Platform projects to specific nodes that are labeled nodename: controlplane1, nodename: worker1, and nodename: worker2, use:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusOperator:
              nodeSelector:
                nodename: controlplane1
            prometheusK8s:
              nodeSelector:
                nodename: worker1
                nodename: worker2
            alertmanagerMain:
              nodeSelector:
                nodename: worker1
                nodename: worker2
            kubeStateMetrics:
              nodeSelector:
                nodename: worker1
            grafana:
              nodeSelector:
                nodename: worker1
            telemeterClient:
              nodeSelector:
                nodename: worker1
            k8sPrometheusAdapter:
              nodeSelector:
                nodename: worker1
                nodename: worker2
            openshiftStateMetrics:
              nodeSelector:
                nodename: worker1
            thanosQuerier:
              nodeSelector:
                nodename: worker1
                nodename: worker2
    • To move a component that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Specify the nodeSelector constraint for the component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>:
              nodeSelector:
                <node_key>: <node_value>
                <node_key>: <node_value>
                <...>

        Substitute <component> accordingly and substitute <node_key>: <node_value> with the map of key-value pairs that specifies the destination nodes. Often, only a single key-value pair is used.

        The component can only run on nodes that have each of the specified key-value pairs as labels. The nodes can have additional labels as well.

        Many of the monitoring components are deployed by using multiple pods across different nodes in the cluster to maintain high availability. When moving monitoring components to labeled nodes, ensure that enough matching nodes are available to maintain resilience for the component. If only one label is specified, ensure that enough nodes contain that label to distribute all of the pods for the component across separate nodes. Alternatively, you can specify multiple labels each relating to individual nodes.

        If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations; see the troubleshooting commands after this procedure.

        For example, to move monitoring components for user-defined projects to specific worker nodes labeled nodename: worker1 and nodename: worker2, use:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheusOperator:
              nodeSelector:
                nodename: worker1
            prometheus:
              nodeSelector:
                nodename: worker1
                nodename: worker2
            thanosRuler:
              nodeSelector:
                nodename: worker1
                nodename: worker2
  2. Save the file to apply the changes. The components affected by the new configuration are moved to the new nodes automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
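    If components remain in a Pending state after you configure a nodeSelector constraint, the following optional commands, which are not part of the documented procedure, help you find the affected pods and the scheduling events that explain why they cannot be placed. Substitute the project and pod name as needed:

    $ oc -n openshift-monitoring get pods --field-selector=status.phase=Pending
    $ oc -n openshift-monitoring describe pod <pending_pod_name>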

Assigning tolerations to monitoring components

You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes.

Prerequisites
  • If you are configuring core OpenShift Container Platform monitoring components:

    • You have access to the cluster as a user with the cluster-admin role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the ConfigMap object:

    • To assign tolerations to a component that monitors core OpenShift Container Platform projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Specify tolerations for the component:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>:
              tolerations:
                <toleration_specification>

        Substitute <component> and <toleration_specification> accordingly.

        For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1. This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the alertmanagerMain component to tolerate the example taint:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            alertmanagerMain:
              tolerations:
              - key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoSchedule"
    • To assign tolerations to a component that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Specify tolerations for the component:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>:
              tolerations:
                <toleration_specification>

        Substitute <component> and <toleration_specification> accordingly.

        For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1. This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            thanosRuler:
              tolerations:
              - key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoSchedule"
  2. Save the file to apply the changes. The new component placement configuration is applied automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
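    As an optional check, you can confirm that the toleration was propagated to the running pods. For example, assuming the default alertmanager-main-0 pod name for the alertmanagerMain example above:

    $ oc -n openshift-monitoring get pod alertmanager-main-0 -o jsonpath='{.spec.tolerations}'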


Configuring persistent storage

Running cluster monitoring with persistent storage means that your metrics are stored to a persistent volume (PV) and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be guarded from data loss. For production environments, it is highly recommended to configure persistent storage. Because of the high IO demands, it is advantageous to use local storage.

Persistent storage prerequisites

  • Dedicate sufficient local persistent storage to ensure that the disk does not become full. How much storage you need depends on the number of pods. For information on system requirements for persistent storage, see Prometheus database storage requirements.

  • Verify that you have a persistent volume (PV) ready to be claimed by the persistent volume claim (PVC), one PV for each replica. Because Prometheus and Alertmanager both have two replicas, you need four PVs to support the entire monitoring stack. The PVs are available from the Local Storage Operator, but not if you have enabled dynamically provisioned storage.

  • Use the block type of storage.

  • Configure local persistent storage.

    If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: Block in the LocalVolume object. Prometheus cannot use raw block volumes.
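    For illustration only, a LocalVolume object that provisions filesystem-mode volumes instead of raw block volumes might look like the following. The storage class name and device path are placeholders:

    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: local-disks
      namespace: openshift-local-storage
    spec:
      storageClassDevices:
      - storageClassName: local-storage
        volumeMode: Filesystem
        fsType: xfs
        devicePaths:
        - /dev/sdb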

Configuring a local persistent volume claim

For monitoring components to use a persistent volume (PV), you must configure a persistent volume claim (PVC).

Prerequisites
  • If you are configuring core OpenShift Container Platform monitoring components:

    • You have access to the cluster as a user with the cluster-admin role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the ConfigMap object:

    • To configure a PVC for a component that monitors core OpenShift Container Platform projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add your PVC configuration for the component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap