A horizontal pod autoscaler, defined by a HorizontalPodAutoscaler
object,
specifies how the system should automatically increase or decrease the scale of
a replication controller or deployment configuration, based on metrics collected
from the pods that belong to that replication controller or deployment
configuration.
Horizontal pod autoscaling is supported starting in OpenShift Enterprise 3.1.1.
In order to use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics.
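If you are not sure whether metrics are available, one quick check (a sketch that assumes the default installation, which places Heapster and the other metrics components in the openshift-infra project; your cluster may use a different project) is to list the metrics pods and confirm they are running:

```
$ oc get pods -n openshift-infra
```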
The following metrics are supported by horizontal pod autoscalers:
Metric | Description
---|---
CPU Utilization | Percentage of the requested CPU
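Because utilization is expressed as a percentage of the CPU each pod requests, the pods being scaled should declare a CPU request; without one there is no baseline for the percentage. A minimal sketch of such a request (the names, image, and value here are illustrative only; in practice the request would usually appear in the pod template of the deployment configuration or replication controller being scaled):

```
apiVersion: v1
kind: Pod
metadata:
  name: frontend-example        # illustrative name
spec:
  containers:
  - name: frontend
    image: example/frontend     # illustrative image
    resources:
      requests:
        cpu: 200m               # the "requested CPU" that utilization is measured against
```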
After a horizontal pod autoscaler is created, it begins attempting to query Heapster for metrics on the pods. It may take one to two minutes before Heapster obtains the initial metrics.
After metrics are available in Heapster, the horizontal pod autoscaler computes the ratio of current metric utilization to desired metric utilization, and scales up or down accordingly. Scaling occurs at a regular interval, but it can take one to two minutes before new metrics make their way into Heapster.
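As a rough illustration of that calculation (made-up numbers, and a simplification of what the controller actually does), suppose four replicas average 90% of their requested CPU against an 80% target:

```
# Illustrative arithmetic only: desired replicas = ceil(current replicas * current% / target%)
$ echo $(( (4 * 90 + 79) / 80 ))   # integer form of ceil(4 * 90 / 80) = ceil(4.5)
5
```

so the autoscaler would scale toward five replicas.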
For replication controllers, this scaling corresponds directly to the replicas
of the replication controller. For deployment configurations, scaling corresponds
directly to the replica count of the deployment configuration. Note that autoscaling
applies only to the latest deployment in the Complete
phase.
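Once the example below is running, one way to see this correspondence (assuming the frontend deployment configuration defined in the example that follows) is to check the replica count reported for the deployment configuration as the autoscaler adjusts it:

```
$ oc get dc frontend
```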
To create a horizontal pod autoscaler, first define it in a file. For example:
```
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler (1)
spec:
  scaleRef:
    kind: DeploymentConfig (2)
    name: frontend (3)
    apiVersion: v1 (4)
    subresource: scale
  minReplicas: 1 (5)
  maxReplicas: 10 (6)
  cpuUtilization:
    targetPercentage: 80 (7)
```
1. The name of this horizontal pod autoscaler object.
2. The kind of object to scale.
3. The name of the object to scale.
4. The API version of the object to scale.
5. The minimum number of replicas to which to scale down.
6. The maximum number of replicas to which to scale up.
7. The percentage of the requested CPU that each pod should ideally be using.
Save your definition to a file, such as scaler.yaml, then use the CLI to create the object:
```
$ oc create -f scaler.yaml
```
To view the status of a horizontal pod autoscaler:
```
$ oc get hpa
NAME              REFERENCE                                 TARGET    CURRENT   MINPODS   MAXPODS   AGE
frontend-scaler   DeploymentConfig/default/frontend/scale   80%       79%       1         10        8d

$ oc describe hpa frontend-scaler
Name:                     frontend-scaler
Namespace:                default
Labels:                   <none>
CreationTimestamp:        Mon, 26 Oct 2015 21:13:47 -0400
Reference:                DeploymentConfig/default/frontend/scale
Target CPU utilization:   80%
Current CPU utilization:  79%
Min pods:                 1
Max pods:                 10
```
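To follow the autoscaler as its current utilization and replica counts change, you can also watch it (this sketch assumes your oc client supports the --watch flag for oc get):

```
$ oc get hpa frontend-scaler --watch
```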