As a developer, you can use the custom metrics autoscaler to specify how OpenShift Container Platform should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics, not only on CPU or memory.

The custom metrics autoscaler currently supports only the Prometheus trigger, which can use the installed OpenShift Container Platform monitoring or an external Prometheus server as the metrics source.

The custom metrics autoscaler is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Understanding the custom metrics autoscaler

The custom metrics autoscaler uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the OpenShift Container Platform horizontal pod autoscaler (HPA).

The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers, also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that OpenShift Container Platform can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling. The custom metrics autoscaler currently supports only the Prometheus trigger, which can use the installed OpenShift Container Platform monitoring or an external Prometheus server as the metrics source.

To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object, which defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed.
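
For example, a minimal scaled object looks similar to the following sketch; complete, annotated examples appear later in this section, and the trigger metadata shown here is illustrative:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject
  namespace: my-namespace
spec:
  scaleTargetRef:
    name: example-deployment # the deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus # currently the only supported trigger type
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      metricName: http_requests_total
      threshold: '5'
      query: sum(rate(http_requests_total{job="test-app"}[1m]))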

You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.

You can verify that the autoscaling has taken place by reviewing the number of pods in your custom resource or by reviewing the Custom Metrics Autoscaler Operator logs for messages similar to the following:

Successfully set ScaleTarget replica count
Successfully updated ScaleTarget
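
You can display these messages by checking the Custom Metrics Autoscaler Operator logs, for example:

$ oc logs deployment/custom-metrics-autoscaler-operator -n openshift-keda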

Installing the custom metrics autoscaler

You can use the OpenShift Container Platform web console to install the Custom Metrics Autoscaler Operator.

The installation creates five CRDs:

  • ClusterTriggerAuthentication

  • KedaController

  • ScaledJob

  • ScaledObject

  • TriggerAuthentication

Prerequisites
  • If you use the community KEDA:

    • Uninstall the community KEDA. You cannot run both KEDA and the custom metrics autoscaler on the same OpenShift Container Platform cluster.

    • Remove the KEDA 1.x custom resource definitions by running the following commands:

      $ oc delete crd scaledobjects.keda.k8s.io
      $ oc delete crd triggerauthentications.keda.k8s.io
Procedure
  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.

  2. Choose Custom Metrics Autoscaler from the list of available Operators, and click Install.

  3. On the Install Operator page, ensure that the All namespaces on the cluster (default) option is selected for Installation Mode. This installs the Operator in all namespaces.

  4. Ensure that the openshift-keda namespace is selected for Installed Namespace. OpenShift Container Platform creates the namespace if it is not present in your cluster.

  5. Click Install.

  6. Verify the installation by listing the Custom Metrics Autoscaler Operator components:

    1. Navigate to Workloads → Pods.

    2. Select the openshift-keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running.

    3. Navigate to Workloads → Deployments to verify that the custom-metrics-autoscaler-operator deployment is running.

  7. Optional: Verify the installation from the OpenShift CLI by using the following command:

    $ oc get all -n openshift-keda

    The output appears similar to the following:

    Example output
    NAME                                                      READY   STATUS    RESTARTS   AGE
    pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp   1/1     Running   0          18m
    
    NAME                                                 READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/custom-metrics-autoscaler-operator   1/1     1            1           18m
    
    NAME                                                            DESIRED   CURRENT   READY   AGE
    replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8   1         1         1       18m
  8. Create the KedaController custom resource, which creates the required CRDs:

    1. In the OpenShift Container Platform web console, click Operators → Installed Operators.

    2. Click Custom Metrics Autoscaler.

    3. On the Operator Details page, click the KedaController tab.

    4. On the KedaController tab, click Create KedaController and edit the file.

      kind: KedaController
      apiVersion: keda.sh/v1alpha1
      metadata:
        name: keda
        namespace: openshift-keda
      spec:
        watchNamespace: '' (1)
        operator:
          logLevel: info (2)
          logEncoder: console (3)
        metricsServer:
          logLevel: '0' (4)
        serviceAccount: {}
      1 Specifies the namespaces that the Custom Metrics Autoscaler Operator watches. Enter names in a comma-separated list. Omit the parameter or leave it empty to watch all namespaces. The default is empty.
      2 Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug, info, error. The default is info.
      3 Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json. The default is console.
      4 Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug. The default is 0.
    5. Click Create to create the KedaController.
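
    Optional: You can verify that the KEDA custom resource definitions are present by using the OpenShift CLI:

    $ oc get crd | grep keda.sh

    The output should list the clustertriggerauthentications.keda.sh, kedacontrollers.keda.sh, scaledjobs.keda.sh, scaledobjects.keda.sh, and triggerauthentications.keda.sh CRDs.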

Understanding the custom metrics autoscaler triggers

Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods.

The custom metrics autoscaler currently supports only the Prometheus trigger, which can use the installed OpenShift Container Platform monitoring or an external Prometheus server as the metrics source.

You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow.

Understanding the Prometheus trigger

Scale applications based on Prometheus metrics. See "Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring" for information on the configurations required to use OpenShift Container Platform monitoring as a source for metrics.

If Prometheus is taking metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler does not have any metrics to scale on.

Example scaled object with a Prometheus target
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prom-scaledobject
  namespace: my-namespace
spec:
 ...
  triggers:
  - type: prometheus (1)
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 (2)
      namespace: kedatest (3)
      metricName: http_requests_total (4)
      threshold: '5' (5)
      query: sum(rate(http_requests_total{job="test-app"}[1m])) (6)
      authModes: "basic" (7)
1 Specifies Prometheus as the scaler/trigger type.
2 Specifies the address of the Prometheus server. This example uses OpenShift Container Platform monitoring.
3 Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if you use OpenShift Container Platform monitoring as a source for the metrics.
4 Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique.
5 Specifies the threshold value at which scaling starts.
6 Specifies the Prometheus query to use.
7 Specifies the authentication method to use. Prometheus scalers support bearer authentication, basic authentication, or TLS authentication. You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret.

Understanding custom metrics autoscaler trigger authentications

A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass OpenShift Container Platform secrets, platform-native pod authentication mechanisms, environment variables, and so on.

You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace.

Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces.

Trigger authentications and cluster trigger authentications use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object.
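
For example, the authentication reference in a scaled object trigger looks similar to the following; the object names here are illustrative:

triggers:
- type: prometheus
  metadata:
    # trigger-specific metadata
  authenticationRef:
    name: my-cluster-triggerauthentication
    kind: ClusterTriggerAuthentication # required only when referencing a cluster trigger authentication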

Example trigger authentication with a secret
kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: secret-triggerauthentication
  namespace: my-namespace (1)
spec:
  secretTargetRef: (2)
  - parameter: user-name (3)
    name: my-secret (4)
    key: USER_NAME (5)
  - parameter: password
    name: my-secret
    key: PASSWORD
1 Specifies the namespace of the object you want to scale.
2 Specifies that this trigger authentication uses a secret for authorization.
3 Specifies the authentication parameter to supply by using the secret.
4 Specifies the name of the secret to use.
5 Specifies the key in the secret to use with the specified parameter.
Example cluster trigger authentication with a secret
kind: ClusterTriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata: (1)
  name: secret-cluster-triggerauthentication
spec:
  secretTargetRef: (2)
  - parameter: user-name (3)
    name: secret-name (4)
    key: user-name (5)
  - parameter: password
    name: secret-name
    key: password
1 Note that no namespace is used with a cluster trigger authentication.
2 Specifies that this trigger authentication uses a secret for authorization.
3 Specifies the authentication parameter to supply by using the secret.
4 Specifies the name of the secret to use.
5 Specifies the key in the secret to use with the specified parameter.
Example trigger authentication with a token
kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: token-triggerauthentication
  namespace: my-namespace (1)
spec:
  secretTargetRef: (2)
  - parameter: bearerToken (3)
    name: my-token-2vzfq (4)
    key: token (5)
  - parameter: ca
    name: my-token-2vzfq
    key: ca.crt
1 Specifies the namespace of the object you want to scale.
2 Specifies that this trigger authentication uses a secret for authorization.
3 Specifies the authentication parameter to supply by using the token.
4 Specifies the name of the token to use.
5 Specifies the key in the token to use with the specified parameter.
Example trigger authentication with an environment variable
kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: env-var-triggerauthentication
  namespace: my-namespace (1)
spec:
  env: (2)
  - parameter: access_key (3)
    name: ACCESS_KEY (4)
    containerName: my-container (5)
1 Specifies the namespace of the object you want to scale.
2 Specifies that this trigger authentication uses environment variables for authorization.
3 Specifies the parameter to set by using this variable.
4 Specifies the name of the environment variable.
5 Optional: Specifies a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object.
Example trigger authentication with pod authentication providers
kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: pod-id-triggerauthentication
  namespace: my-namespace (1)
spec:
  podIdentity: (2)
    provider: aws-eks (3)
1 Specifies the namespace of the object you want to scale.
2 Specifies that this trigger authentication uses a platform-native pod authentication method for authorization.
3 Specifies a pod identity. Supported values are none, azure, aws-eks, or aws-kiam. The default is none.
Using trigger authentications

To use a trigger authentication or a cluster trigger authentication, create the authentication by using a custom resource, then add a reference to it in a scaled object or scaled job.

Prerequisites
  • The Custom Metrics Autoscaler Operator must be installed.

  • If you are using a secret, the Secret object must exist, for example:

    Example secret
    apiVersion: v1
    kind: Secret
    metadata:
      name: my-secret
    data:
      user-name: <base64_username>
      password: <base64_password>
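
    If needed, you can create such a secret from literal values; the user name and password shown here are placeholders:

    $ oc create secret generic my-secret --from-literal=user-name=<username> --from-literal=password=<password>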
Procedure
  1. Create the TriggerAuthentication or ClusterTriggerAuthentication object.

    1. Create a YAML file that defines the object:

      Example trigger authentication with a secret
      kind: TriggerAuthentication
      apiVersion: keda.sh/v1alpha1
      metadata:
        name: prom-triggerauthentication
        namespace: my-namespace
      spec:
        secretTargetRef:
        - parameter: user-name
          name: my-secret
          key: USER_NAME
        - parameter: password
          name: my-secret
          key: PASSWORD
    2. Create the TriggerAuthentication object:

      $ oc create -f <file-name>.yaml
  2. Create or edit a ScaledObject YAML file:

    Example scaled object
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: scaledobject
      namespace: my-namespace
    spec:
      scaleTargetRef:
        name: example-deployment
      maxReplicaCount: 100
      minReplicaCount: 0
      pollingInterval: 30
      triggers:
      - type: prometheus
        metadata:
          serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
          namespace: kedatest # replace <NAMESPACE>
          metricName: http_requests_total
          threshold: '5'
          query: sum(rate(http_requests_total{job="test-app"}[1m]))
          authModes: "basic"
        authenticationRef: (1)
          name: prom-triggerauthentication
    1 Optional: Specifies a trigger authentication.

    To use a cluster trigger authentication instead, reference it with an additional kind parameter:

        authenticationRef: (2)
          name: prom-cluster-triggerauthentication
          kind: ClusterTriggerAuthentication
    2 Optional: Specifies a cluster trigger authentication. You must include the kind: ClusterTriggerAuthentication parameter.

    It is not necessary to specify both a namespace trigger authentication and a cluster trigger authentication; each trigger references a single authentication object.

  3. Create the object. For example:

    $ oc apply -f <file-name>

Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring

You can use the installed OpenShift Container Platform monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform.

These steps are not required for an external Prometheus source.

You must perform the following tasks, as described in this section:

  • Create a service account to get a token.

  • Create a role.

  • Add that role to the service account.

  • Reference the token in the trigger authentication object used by Prometheus.

Prerequisites
  • OpenShift Container Platform monitoring must be installed.

  • Monitoring of user-defined workloads must be enabled in OpenShift Container Platform monitoring, as described in the Creating a user-defined workload monitoring config map section.

  • The Custom Metrics Autoscaler Operator must be installed.

Procedure
  1. Change to the project with the object you want to scale:

    $ oc project my-project
  2. Use the following command to create a service account, if your cluster does not have one:

    $ oc create serviceaccount <service_account>

    where:

    <service_account>

    Specifies the name of the service account.
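
    For example, to create the thanos service account that is used in the examples that follow:

    $ oc create serviceaccount thanos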

  3. Use the following command to locate the token assigned to the service account:

    $ oc describe serviceaccount <service_account>

    where:

    <service_account>

    Specifies the name of the service account.

    Example output
    Name:                thanos
    Namespace:           my-project
    Labels:              <none>
    Annotations:         <none>
    Image pull secrets:  thanos-dockercfg-nnwgj
    Mountable secrets:   thanos-dockercfg-nnwgj
    Tokens:              thanos-token-9g4n5 (1)
    Events:              <none>
    
    1 Use this token in the trigger authentication.
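
    Alternatively, you can list the token secrets that are associated with the service account; the secret name here matches the example output above:

    $ oc get secrets -n my-project | grep thanos-token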
  4. Create a trigger authentication with the service account token:

    1. Create a YAML file similar to the following:

      apiVersion: keda.sh/v1alpha1
      kind: TriggerAuthentication
      metadata:
        name: keda-trigger-auth-prometheus
      spec:
        secretTargetRef: (1)
        - parameter: bearerToken (2)
          name: thanos-token-9g4n5 (3)
          key: token (4)
        - parameter: ca
          name: thanos-token-9g4n5
          key: ca.crt
      1 Specifies that this object uses a secret for authorization.
      2 Specifies the authentication parameter to supply by using the token.
      3 Specifies the name of the token to use.
      4 Specifies the key in the token to use with the specified parameter.
    2. Create the CR object:

      $ oc create -f <file-name>.yaml
  5. Create a role for reading Thanos metrics:

    1. Create a YAML file with the following parameters:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: thanos-metrics-reader
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        verbs:
        - get
      - apiGroups:
        - metrics.k8s.io
        resources:
        - pods
        - nodes
        verbs:
        - get
        - list
        - watch
    2. Create the CR object:

      $ oc create -f <file-name>.yaml
  6. Create a role binding for reading Thanos metrics:

    1. Create a YAML file similar to the following:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: thanos-metrics-reader (1)
        namespace: my-project (2)
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: thanos-metrics-reader
      subjects:
      - kind: ServiceAccount
        name: thanos (3)
        namespace: my-project (4)
      1 Specifies a name for the role binding.
      2 Specifies the namespace of the object you want to scale.
      3 Specifies the name of the service account to bind to the role.
      4 Specifies the namespace of the object you want to scale.
    2. Create the CR object:

      $ oc create -f <file-name>.yaml
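
      Alternatively, you can create an equivalent role binding with a single command:

      $ oc create rolebinding thanos-metrics-reader --role=thanos-metrics-reader --serviceaccount=my-project:thanos -n my-project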

You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in the following sections. To use OpenShift Container Platform monitoring as the source, in the trigger, or scaler, specify the prometheus type and use https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 as the serverAddress.
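
For example, a trigger that uses the trigger authentication created in this procedure might look similar to the following; the metric name and query are placeholders for a metric exposed by your application:

triggers:
- type: prometheus
  metadata:
    serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
    namespace: my-project
    metricName: http_requests_total
    threshold: '5'
    query: sum(rate(http_requests_total{job="test-app"}[1m]))
    authModes: "bearer"
  authenticationRef:
    name: keda-trigger-auth-prometheus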

Understanding how to add custom metrics autoscalers

To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job.
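
For example, a minimal scaled job looks similar to the following sketch; the job template and the trigger metadata are illustrative:

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: prom-scaledjob
  namespace: my-namespace
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: worker
          image: registry.example.com/my-worker:latest # placeholder image
        restartPolicy: Never
  maxReplicaCount: 100
  pollingInterval: 30
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      metricName: http_requests_total
      threshold: '5'
      query: sum(rate(http_requests_total{job="test-app"}[1m]))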

You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.

Adding a custom metrics autoscaler to a workload

You can create a custom metrics autoscaler for a workload that is created by a Deployment, StatefulSet, or custom resource object.

Prerequisites
  • The Custom Metrics Autoscaler Operator must be installed.

Procedure
  1. Create a YAML file similar to the following:

    Example scaled object
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: scaledobject
      namespace: my-namespace
    spec:
      scaleTargetRef:
        apiVersion: apps/v1 (1)
        name: example-deployment (2)
        kind: Deployment (3)
        envSourceContainerName: .spec.template.spec.containers[0] (4)
      cooldownPeriod:  200 (5)
      maxReplicaCount: 100 (6)
      minReplicaCount: 0 (7)
      pollingInterval: 30 (8)
      advanced:
        restoreToOriginalReplicaCount: false (9)
        horizontalPodAutoscalerConfig:
          behavior: (10)
            scaleDown:
              stabilizationWindowSeconds: 300
              policies:
              - type: Percent
                value: 100
                periodSeconds: 15
      triggers:
      - type: prometheus (11)
        metadata:
          serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
          namespace: kedatest
          metricName: http_requests_total
          threshold: '5'
          query: sum(rate(http_requests_total{job="test-app"}[1m]))
          authModes: "basic"
        authenticationRef: (12)
          name: prom-triggerauthentication