You can specify which pods to group together, which topology domains they are spread among, and the acceptable skew.
The following examples demonstrate pod topology spread constraint configurations.
Example of distributing pods that match the specified labels based on their zone
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    region: us-east
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  topologySpreadConstraints:
  - maxSkew: 1 (1)
    topologyKey: topology.kubernetes.io/zone (2)
    whenUnsatisfiable: DoNotSchedule (3)
    labelSelector: (4)
      matchLabels:
        region: us-east (5)
    matchLabelKeys:
    - my-pod-label (6)
  containers:
  - image: "docker.io/ocpqe/hello-pod"
    name: hello-pod
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ALL]
(1) The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0.
(2) The key of a node label. Nodes with this key and identical value are considered to be in the same topology.
(3) How to handle a pod if it does not satisfy the spread constraint. The default is DoNotSchedule, which tells the scheduler not to schedule the pod. Set to ScheduleAnyway to still schedule the pod, but the scheduler prioritizes honoring the skew to not make the cluster more imbalanced.
(4) Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector, otherwise no pods can be matched.
(5) Be sure that this Pod spec also sets its labels to match this label selector if you want it to be counted properly in the future.
(6) A list of pod label keys to select which pods to calculate spreading over.
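The topologyKey in the previous example relies on the well-known topology.kubernetes.io/zone node label. As a quick sanity check, you can list your nodes together with that label; this is a minimal sketch that assumes the oc CLI, and kubectl accepts the same flag:

$ oc get nodes -L topology.kubernetes.io/zone

Nodes that show the same value in the added zone column belong to the same topology domain for this constraint.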
Example demonstrating a single pod topology spread constraint
kind: Pod
apiVersion: v1
metadata:
  name: my-pod
  labels:
    region: us-east
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        region: us-east
  containers:
  - image: "docker.io/ocpqe/hello-pod"
    name: hello-pod
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ALL]
The previous example defines a Pod spec with a single pod topology spread constraint. It matches on pods labeled region: us-east, distributes among zones, specifies a skew of 1, and does not schedule the pod if it does not meet these requirements.
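To see where the scheduler actually placed the pod, you can create it and inspect its node assignment. The following commands are a sketch; the file name pod-spread.yaml is hypothetical and stands in for wherever you saved the previous Pod spec:

$ oc create -f pod-spread.yaml
$ oc get pod my-pod -o wide

The -o wide output includes the node name, which you can map back to its zone label to confirm the spread.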
Example demonstrating multiple pod topology spread constraints
kind: Pod
apiVersion: v1
metadata:
  name: my-pod-2
  labels:
    region: us-east
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: node
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        region: us-east
  - maxSkew: 1
    topologyKey: rack
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        region: us-east
  containers:
  - image: "docker.io/ocpqe/hello-pod"
    name: hello-pod
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ALL]
The previous example defines a Pod spec with two pod topology spread constraints. Both match on pods labeled region: us-east, specify a skew of 1, and do not schedule the pod if it does not meet these requirements. The first constraint distributes pods based on a user-defined label node, and the second constraint distributes pods based on a user-defined label rack. Both constraints must be met for the pod to be scheduled.
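Because node and rack are user-defined topology keys rather than well-known labels, these constraints can be satisfied only if your nodes actually carry labels with those keys. The following commands are a sketch with hypothetical node names; replace them with nodes from your cluster and with your own label values:

$ oc label node worker-0 node=worker-0 rack=rack1
$ oc label node worker-1 node=worker-1 rack=rack2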