
Overview

Each container running on a node consumes compute resources.

When authoring your pod, you can optionally specify how much CPU and memory (RAM) each container needs; doing so allows the cluster to schedule pods more effectively and helps ensure satisfactory performance.

Compute resource units

Compute resources are measurable quantities which can be requested, allocated, and consumed.

CPU is measured in units called millicores. Each node in the cluster introspects the operating system to determine the number of CPU cores on the node, then multiplies that value by 1000 to express its total capacity. For example, if a node has 2 cores, the node's CPU capacity would be represented as 2000m. If you wanted to use 1/10 of a single core, you would represent that as 100m.

Memory is measured in bytes. In addition, memory may be specified with SI suffixes (E, P, T, G, M, K, m) or their power-of-two equivalents (Ei, Pi, Ti, Gi, Mi, Ki).
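For illustration, the resources stanza below (a hypothetical fragment, not a complete pod definition) shows both notations side by side; 100m is equivalent to 0.1 of a core, and 128Mi (power-of-two) is slightly more than 128M (SI).

resources:
  requests:
    cpu: 100m        # 100 millicores, equivalent to 0.1 of a single core
    memory: 128Mi    # 128 * 1024 * 1024 = 134217728 bytes; compare 128M = 128000000 bytes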

CPU requests

Each container in a pod may specify the amount of CPU it requests on a node. CPU requests are used by the scheduler to find a node with an appropriate fit for your container. The CPU request represents a minimum amount of CPU that your container may consume, but if there is no contention for CPU, it may burst to use as much CPU as is available on the node. If there is CPU contention on the node, CPU requests provide a relative weight across all containers on the system for how much CPU time the container may use. On the node, CPU requests map to kernel CFS shares to enforce this behavior.
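As a sketch (the image and values here are illustrative), a container that sets only a CPU request looks like the fragment below; with no limit defined, it is guaranteed its requested share under contention but may burst above it when spare CPU is available.

containers:
- image: nginx
  name: nginx
  resources:
    requests:
      cpu: 250m    # relative weight under contention; the container may burst above this if CPU is free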

CPU limits

Each container in a pod may specify the amount of CPU it is limited to use on a node. CPU limits are used to control the maximum amount of CPU that your container may use independent of contention on the node. If a container attempts to use more than the specified limit, the system will throttle the container. This allows your container to have a consistent level of service independent of the number of pods scheduled to the node.

Memory requests

By default, a container is able to consume as much memory on the node as possible. In order to improve placement of pods in the cluster, it is recommended to specify the amount of memory required for a container to run. The scheduler will then take available node memory capacity into account prior to binding your pod to a node. A container is still able to consume as much memory on the node as possible even when specifying a request.

Memory limits

If you specify a memory limit, you can constrain the amount of memory your container can use. For example, if you specify a limit of 200Mi, your container will be limited to using that amount of memory on the node. If it exceeds the specified memory limit, it will be terminated and potentially restarted, depending on the restart policy.
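For example, a pod along the lines of the following sketch (the name, image, and values are illustrative) would have its container terminated if it used more than 200Mi, and the Always restart policy would cause that container to be restarted:

apiVersion: v1
kind: Pod
metadata:
  name: memory-limited    # illustrative name
spec:
  restartPolicy: Always   # the container is restarted after being killed for exceeding its limit
  containers:
  - image: nginx
    name: nginx
    resources:
      limits:
        memory: 200Mi     # the container is terminated if it exceeds this amount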

Quality of service tiers

A compute resource is classified with a quality of service based on the specified request and limit value.

A BestEffort quality of service is provided when a request and limit are not specified.

A Burstable quality of service is provided when a request is specified that is less than an optionally specified limit.

A Guaranteed quality of service is provided when a limit is specified and any specified request is equal to that limit.

A container may have a different quality of service for each compute resource. For example, a container can have Burstable CPU and Guaranteed memory qualities of service.
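A hypothetical resources stanza for that mixed case could look like the following; the CPU request is lower than the CPU limit (Burstable CPU), while the memory request equals the memory limit (Guaranteed memory):

resources:
  requests:
    cpu: 100m       # less than the CPU limit, so CPU is Burstable
    memory: 200Mi   # equal to the memory limit, so memory is Guaranteed
  limits:
    cpu: 200m
    memory: 200Mi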

The quality of service has an impact on what happens if the resource is compressible or not.

CPU is a compressible resource. A BestEffort CPU container is able to consume as much CPU as is available on the node, but it runs with the lowest priority. A Burstable CPU container is guaranteed to get the minimum amount of CPU requested, but it may or may not get additional CPU time. Excess CPU resources are distributed based on the amount requested across all containers on the node. A Guaranteed CPU container is guaranteed to get the amount requested and no more, even if additional CPU cycles are available. This provides a consistent level of performance independent of other activity on the node.

Memory is an incompressible resource. A BestEffort memory container is able to consume as much memory as is available on the node, but the scheduler may place that container on a node that does not have enough memory to meet its needs. In addition, a BestEffort memory container has the greatest chance of being killed if there is an out of memory event on the node. A Burstable memory container is scheduled on a node that can provide the amount of memory requested, but it may consume more. If there is an out of memory event on the node, containers of this type are killed after BestEffort containers when attempting to recover memory. A Guaranteed memory container will get the amount of memory requested, but no more. If there is a system out of memory event, it will only be killed if there are no BestEffort or Burstable memory containers remaining on the system.

Sample pod file

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        cpu: 100m (1)
        memory: 200Mi (2)
      limits:
        cpu: 200m (3)
        memory: 400Mi (4)
1 The container requests 100m of CPU.
2 The container requests 200Mi of memory.
3 The container is limited to 200m of CPU.
4 The container is limited to 400Mi of memory.
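
Assuming the definition above is saved as pod.yaml, you can create the pod with:

$ oc create -f pod.yaml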

Specifying compute resources via CLI

To specify compute resources via the CLI:

$ oc run nginx --image=nginx --limits=cpu=200m,memory=400Mi --requests=cpu=100m,memory=200Mi

Viewing compute resources

To view compute resources for a pod:

$ oc describe pod nginx-tfjxt
Name:       nginx-tfjxt
Namespace:      default
Image(s):     nginx
Node:       /
Labels:       run=nginx
Status:       Pending
Reason:
Message:
IP:
Replication Controllers:  nginx (1/1 replicas created)
Containers:
  nginx:
    Container ID:
    Image:    nginx
    Image ID:
    QoS Tier:
      cpu:  Burstable
      memory: Burstable
    Limits:
      cpu:  200m
      memory: 400Mi
    Requests:
      cpu:    100m
      memory:   200Mi
    State:    Waiting
    Ready:    False
    Restart Count:  0
    Environment Variables: