Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
Follow this guide to create an Azure Red Hat OpenShift 4 cluster. If you have specific questions, please contact us.
A horizontal pod autoscaler, defined by a HorizontalPodAutoscaler object, specifies how the system should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration.
The following metrics are supported by horizontal pod autoscalers:
Metric | Description | API version |
---|---|---|
CPU utilization | Percentage of the requested CPU | autoscaling/v1, autoscaling/v2beta1 |
Memory utilization | Percentage of the requested memory | autoscaling/v2beta1 |
You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target.
After a horizontal pod autoscaler is created, it begins querying Heapster for metrics on the pods. It may take one to two minutes before Heapster obtains the initial metrics.

After metrics are available in Heapster, the horizontal pod autoscaler computes the ratio of the current metric utilization to the desired metric utilization, and scales up or down accordingly. Scaling occurs at a regular interval, but newly collected metrics can take one to two minutes to appear in Heapster.
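The ratio-based calculation described above can be sketched in a few lines. This is an illustrative approximation of the autoscaler's behavior, not the actual controller code; the function name and rounding choice are assumptions for the sketch:

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization):
    """Approximate the autoscaler's core step: scale the replica count by
    the ratio of current to desired metric utilization, rounding up so the
    target utilization is not exceeded."""
    ratio = current_utilization / target_utilization
    return math.ceil(current_replicas * ratio)

# 4 pods averaging 100% CPU against an 80% target -> scale up to 5 replicas
print(desired_replicas(4, 100, 80))
# 4 pods averaging 40% CPU against an 80% target -> scale down to 2 replicas
print(desired_replicas(4, 40, 80))
```

The rounding up means the autoscaler prefers running slightly under the target utilization to running over it.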
For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase.
When autoscaling for CPU utilization, you can use the oc autoscale command and specify the maximum number of pods you want to run at any given time and the average CPU utilization your pods should target. You can optionally specify the minimum number of pods; otherwise, pods are given default values from the Azure Red Hat OpenShift server.
For example:
$ oc autoscale dc/frontend --max 10 --cpu-percent=80
deploymentconfig "frontend" autoscaled
The example command creates a horizontal pod autoscaler for an existing DeploymentConfig with the following definition:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend (1)
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1 (2)
    kind: DeploymentConfig (3)
    name: frontend (4)
    subresource: scale
  minReplicas: 1 (5)
  maxReplicas: 10 (6)
  targetCPUUtilizationPercentage: 80 (7)
1 | The name of this horizontal pod autoscaler object. |
2 | The API version of the object to scale: apps.openshift.io/v1 for deployment configurations, v1 for replication controllers. |
3 | The kind of object to scale, either ReplicationController or DeploymentConfig. |
4 | The name of an existing object you want to scale. |
5 | The minimum number of replicas when scaling down. The default is 1. |
6 | The maximum number of replicas when scaling up. |
7 | The percentage of the requested CPU that each pod should ideally be using. |
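The "percentage of the requested CPU" in callout 7 is computed from pod resource requests, not node capacity. A minimal sketch of that calculation, with a hypothetical helper name and millicore units assumed for illustration:

```python
def average_utilization(usages_millicores, requests_millicores):
    """Average 'percentage of requested CPU' across pods: each pod's actual
    usage divided by its resource request, expressed as a percentage."""
    percentages = [100 * use / req
                   for use, req in zip(usages_millicores, requests_millicores)]
    return sum(percentages) / len(percentages)

# Two pods each requesting 500m CPU, currently using 400m and 300m:
# per-pod utilization is 80% and 60%, so the average is 70%
print(average_utilization([400, 300], [500, 500]))
```

Because the denominator is the request, pods without CPU requests cannot contribute a utilization percentage, which is why autoscaled pods should always set resource requests.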
Alternatively, the oc autoscale command creates a horizontal pod autoscaler with the following definition when using the v2beta1 version of the horizontal pod autoscaler:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-resource-metrics-cpu (1)
spec:
  scaleTargetRef:
    apiVersion: v1 (2)
    kind: ReplicationController (3)
    name: hello-hpa-cpu (4)
  minReplicas: 1 (5)
  maxReplicas: 10 (6)
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50 (7)
1 | The name of this horizontal pod autoscaler object. |
2 | The API version of the object to scale: v1 for replication controllers, apps.openshift.io/v1 for deployment configurations. |
3 | The kind of object to scale, either ReplicationController or DeploymentConfig. |
4 | The name of an existing object you want to scale. |
5 | The minimum number of replicas when scaling down. The default is 1. |
6 | The maximum number of replicas when scaling up. |
7 | The average percentage of the requested CPU that each pod should be using. |
Unlike CPU-based autoscaling, memory-based autoscaling requires specifying the autoscaler using YAML instead of using the oc autoscale command. Optionally, you can also specify the minimum number of pods and the average memory utilization your pods should target; otherwise, those are given default values from the Azure Red Hat OpenShift server.
Autoscaling for memory utilization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.
Memory-based autoscaling is available only with the autoscaling/v2beta1 API version.
To use memory-based autoscaling:
Enable memory-based autoscaling:
Add the following to your cluster's master-config.yaml file:
...
apiServerArguments:
  runtime-config:
  - apis/autoscaling/v2beta1=true
...
Restart the Azure Red Hat OpenShift services:
$ master-restart api
$ master-restart controllers
If necessary, get the name of the object you want to scale:
$ oc get dc
NAME       REVISION   DESIRED   CURRENT   TRIGGERED BY
frontend   1          5         0         config
Place the following in a file, such as hpa.yaml:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-resource-metrics-memory (1)
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1 (2)
    kind: DeploymentConfig (3)
    name: frontend (4)
  minReplicas: 2 (5)
  maxReplicas: 10 (6)
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 50 (7)
1 | The name of this horizontal pod autoscaler object. |
2 | The API version of the object to scale: apps.openshift.io/v1 for deployment configurations, v1 for replication controllers. |
3 | The kind of object to scale, either ReplicationController or DeploymentConfig. |
4 | The name of an existing object you want to scale. |
5 | The minimum number of replicas when scaling down. The default is 1. |
6 | The maximum number of replicas when scaling up. |
7 | The average percentage of the requested memory that each pod should be using. |
Then, create the autoscaler from the above file:
$ oc create -f hpa.yaml
For memory-based autoscaling to work, memory usage must, on average, increase and decrease proportionally to the replica count: an increase in replica count should lead to a decrease in per-pod memory usage, and a decrease in replica count should lead to an increase in per-pod memory usage.
Use the OpenShift web console to check the memory behavior of your application, and ensure that your application meets these requirements before using memory-based autoscaling.
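One way to reason about the proportionality requirement is to compare per-replica memory across different replica counts: if per-replica usage stays roughly constant as replicas change, total memory tracks the replica count. A minimal sketch, with hypothetical function name, sample data, and tolerance chosen for illustration:

```python
def roughly_proportional(replica_counts, total_memory_mb, tolerance=0.2):
    """Check whether total memory usage tracks replica count: per-replica
    memory should stay roughly constant (within `tolerance`, a fraction of
    the baseline) as the replica count changes."""
    per_replica = [mem / count
                   for count, mem in zip(replica_counts, total_memory_mb)]
    baseline = per_replica[0]
    return all(abs(p - baseline) / baseline <= tolerance for p in per_replica)

# Total memory grows in step with replicas (~256 MB per replica):
# a good candidate for memory-based autoscaling
print(roughly_proportional([2, 4, 8], [512, 1024, 2048]))
# Total memory stays nearly flat regardless of replica count:
# scaling out would not reduce per-pod memory pressure
print(roughly_proportional([2, 4, 8], [512, 520, 530]))
```

An application whose memory footprint is dominated by a fixed per-process cache, for example, tends to fail this check, because adding replicas multiplies the cache rather than spreading the load.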
To view the status of a horizontal pod autoscaler:
Use the oc get command to view information on the CPU utilization and pod limits:
$ oc get hpa/hpa-resource-metrics-cpu
NAME                       REFERENCE                                 TARGET    CURRENT   MINPODS   MAXPODS   AGE
hpa-resource-metrics-cpu   DeploymentConfig/default/frontend/scale   80%       79%       1         10        8d
The output includes the following:
Target. The targeted average CPU utilization across all pods controlled by the deployment configuration.
Current. The current CPU utilization across all pods controlled by the deployment configuration.
Minpods/Maxpods. The minimum and maximum number of replicas that can be set by the autoscaler.
Use the oc describe command for detailed information on the horizontal pod autoscaler object:
$ oc describe hpa/hpa-resource-metrics-cpu
Name:                           hpa-resource-metrics-cpu
Namespace:                      default
Labels:                         <none>
CreationTimestamp:              Mon, 26 Oct 2015 21:13:47 -0400
Reference:                      DeploymentConfig/default/frontend/scale
Target CPU utilization:         80% (1)
Current CPU utilization:        79% (2)
Min replicas:                   1 (3)
Max replicas:                   4 (4)
ReplicationController pods:     1 current / 1 desired
Conditions: (5)
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from pods metric http_requests
  ScalingLimited  False   DesiredWithinRange  the desired replica count is within the acceptable range
Events:
1 | The average percentage of the requested CPU that each pod should be using. |
2 | The current CPU utilization across all pods controlled by the deployment configuration. |
3 | The minimum number of replicas to scale down to. |
4 | The maximum number of replicas to scale up to. |
5 | If the object used the v2beta1 API, status conditions are displayed. |
You can use the status conditions set to determine whether or not the horizontal pod autoscaler is able to scale and whether or not it is currently restricted in any way.
The horizontal pod autoscaler status conditions are available with the autoscaling/v2beta1 version of the API.
The following status conditions are set:
AbleToScale indicates whether the horizontal pod autoscaler is able to fetch and update scales, and whether any backoff conditions are preventing scaling. A True condition indicates scaling is allowed; a False condition indicates scaling is not allowed for the reason specified.

ScalingActive indicates whether the horizontal pod autoscaler is enabled (the replica count of the target is not zero) and is able to calculate desired scales. A True condition indicates metrics are working properly; a False condition generally indicates a problem with fetching metrics.

ScalingLimited indicates that autoscaling is limited because a maximum or minimum replica count was reached. A True condition indicates that you need to raise or lower the minimum or maximum replica count in order to scale; a False condition indicates that the requested scaling is allowed.
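The decision logic implied by these conditions can be sketched as a small function over the status list. This is an illustrative interpretation, not the controller's actual code; the function name and dictionary shape are assumptions:

```python
def can_scale(conditions):
    """Interpret horizontal pod autoscaler status conditions: scaling can
    proceed only when AbleToScale and ScalingActive are both True.
    ScalingLimited being True does not block scaling entirely, but means
    the desired count is capped by minReplicas or maxReplicas."""
    status = {c["type"]: c["status"] == "True" for c in conditions}
    return status.get("AbleToScale", False) and status.get("ScalingActive", False)

# Conditions mirroring the healthy example above
healthy = [
    {"type": "AbleToScale", "status": "True"},
    {"type": "ScalingActive", "status": "True"},
    {"type": "ScalingLimited", "status": "False"},
]
print(can_scale(healthy))

# Conditions mirroring the metrics-fetch failure example below
broken_metrics = [
    {"type": "AbleToScale", "status": "True"},
    {"type": "ScalingActive", "status": "False"},
]
print(can_scale(broken_metrics))
```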
To see the conditions affecting a horizontal pod autoscaler, use oc describe hpa. Conditions appear in the status.conditions field:
$ oc describe hpa cm-test
Name:                           cm-test
Namespace:                      prom
Labels:                         <none>
Annotations:                    <none>
CreationTimestamp:              Fri, 16 Jun 2017 18:09:22 +0000
Reference:                      ReplicationController/cm-test
Metrics:                        ( current / target )
  "http_requests" on pods:      66m / 500m
Min replicas:                   1
Max replicas:                   4
ReplicationController pods:     1 current / 1 desired
Conditions: (1)
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from pods metric http_request
  ScalingLimited  False   DesiredWithinRange  the desired replica count is within the acceptable range
Events:
1 | The horizontal pod autoscaler status messages. |
The following is an example of a horizontal pod autoscaler that is unable to scale:
Conditions:
  Type         Status  Reason          Message
  ----         ------  ------          -------
  AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: replicationcontrollers/scale.extensions "hello-hpa-cpu" not found
The following is an example of a horizontal pod autoscaler that could not obtain the metrics needed for scaling:
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from heapster
The following is an example of a horizontal pod autoscaler where the desired replica count was less than the minimum replica count:
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  the last scale time was sufficiently old as to warrant a new scale
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from pods metric http_request
  ScalingLimited  True    TooFewReplicas    the desired replica count is less than the minimum replica count