Over time, API objects created in OpenShift Dedicated can accumulate in the cluster’s etcd data store through normal user operations, such as when building and deploying applications.
A user with the dedicated-admin role can periodically prune older versions of objects from the cluster that are no longer required. For example, by pruning images you can delete older images and layers that are no longer in use, but are still taking up disk space.
The CLI groups prune operations under a common parent command:
$ oc adm prune <object_type> <options>
This specifies:
The <object_type> to perform the action on, such as groups, builds, deployments, or images.
The <options> supported to prune that object type.
To prune groups records from an external provider, administrators can run the following command:
$ oc adm prune groups \
--sync-config=path/to/sync/config [<options>]
Option | Description |
---|---|
--confirm | Indicate that pruning should occur, instead of performing a dry-run. |
--blacklist | Path to the group blacklist file. |
--whitelist | Path to the group whitelist file. |
--sync-config | Path to the synchronization configuration file. |
To see the groups that the prune command deletes, run the following command:
$ oc adm prune groups --sync-config=ldap-sync-config.yaml
To perform the prune operation, add the --confirm
flag:
$ oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm
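Conceptually, the whitelist and blacklist narrow the set of synced group records that the prune command considers. The following Python sketch illustrates that set logic only; the group names are made up and this is not the actual oc implementation:

```python
# Illustrative sketch (not the oc implementation) of how a whitelist and
# blacklist narrow the set of group records considered for pruning.
# All group names here are hypothetical.
synced_groups = {"dev-team", "qa-team", "ops-team", "stale-team"}
whitelist = {"qa-team", "stale-team"}   # if given, only these are candidates
blacklist = {"qa-team"}                 # these are never pruned

candidates = synced_groups & whitelist if whitelist else synced_groups
to_prune = sorted(candidates - blacklist)
print(to_prune)
```

Running this prints only the group that is whitelisted and not blacklisted, mirroring how the two files compose.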
You can prune resources associated with deployments that are no longer required by the system, due to age and status.
The following command prunes replication controllers associated with DeploymentConfig objects:
$ oc adm prune deployments [<options>]
To also prune replica sets associated with Deployment objects, use the --replica-sets flag.
Option | Description |
---|---|
--confirm | Indicate that pruning should occur, instead of performing a dry-run. |
--keep-complete=<N> | Per DeploymentConfig object, keep the last N replication controllers that have a status of Complete and a replica count of zero. The default is 5. |
--keep-failed=<N> | Per DeploymentConfig object, keep the last N replication controllers that have a status of Failed and a replica count of zero. The default is 1. |
--keep-younger-than=<duration> | Do not prune any replication controller that is younger than <duration> relative to the current time. The default is 60m. |
--orphans | Prune all replication controllers that no longer have a DeploymentConfig object, whose status is Complete or Failed, and whose replica count is zero. |
To see what a pruning operation would delete, run the following command:
$ oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \
--keep-younger-than=60m
To actually perform the prune operation, add the --confirm
flag:
$ oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \
--keep-younger-than=60m --confirm
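The retention rules that --keep-complete, --keep-failed, and --keep-younger-than express can be modeled roughly in Python. This is a simplified assumption-laden sketch, not oc's actual logic, and the replication controller names and ages are invented:

```python
# Rough model (simplified; NOT oc's actual implementation) of the
# retention rules for replication controllers.
from datetime import timedelta

# (name, status, age), newest first within each status
rcs = [
    ("app-7", "Complete", timedelta(minutes=30)),
    ("app-6", "Complete", timedelta(hours=2)),
    ("app-5", "Complete", timedelta(hours=3)),
    ("app-4", "Failed",   timedelta(hours=4)),
    ("app-3", "Failed",   timedelta(hours=5)),
]

keep_complete, keep_failed = 1, 1
keep_younger_than = timedelta(minutes=60)

def prunable(rcs):
    complete = [r for r in rcs if r[1] == "Complete"]
    failed = [r for r in rcs if r[1] == "Failed"]
    # keep the newest N of each status, plus anything younger than the cutoff
    kept = {r[0] for r in complete[:keep_complete] + failed[:keep_failed]}
    kept |= {r[0] for r in rcs if r[2] < keep_younger_than}
    return [r[0] for r in rcs if r[0] not in kept]

print(prunable(rcs))  # candidates for pruning
```

With --keep-complete=1 and --keep-failed=1, the newest Complete and Failed controllers survive, as does anything under 60 minutes old; the rest become prune candidates.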
To prune builds that are no longer required by the system due to age and status, administrators can run the following command:
$ oc adm prune builds [<options>]
Option | Description |
---|---|
--confirm | Indicate that pruning should occur, instead of performing a dry-run. |
--orphans | Prune all builds whose build configuration no longer exists and whose status is complete, failed, error, or canceled. |
--keep-complete=<N> | Per build configuration, keep the last N builds whose status is complete. The default is 5. |
--keep-failed=<N> | Per build configuration, keep the last N builds whose status is failed, error, or canceled. The default is 1. |
--keep-younger-than=<duration> | Do not prune any object that is younger than <duration> relative to the current time. The default is 60m. |
To see what a pruning operation would delete, run the following command:
$ oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \
--keep-younger-than=60m
To actually perform the prune operation, add the --confirm
flag:
$ oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \
--keep-younger-than=60m --confirm
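The --orphans condition for builds combines two checks: the owning build configuration is gone and the build has reached a terminal status. A small hedged sketch (hypothetical names; not the oc implementation):

```python
# Sketch (simplified assumption, not oc's implementation) of which builds
# --orphans selects: BuildConfig no longer exists AND status is terminal.
terminal = {"Complete", "Failed", "Error", "Cancelled"}
existing_buildconfigs = {"frontend"}  # hypothetical cluster state

# (build name, owning BuildConfig, status)
builds = [
    ("frontend-1", "frontend", "Complete"),  # BuildConfig still exists: keep
    ("backend-1",  "backend",  "Complete"),  # orphaned and terminal: prune
    ("backend-2",  "backend",  "Running"),   # orphaned but still running: keep
]

orphans = [name for name, bc, status in builds
           if bc not in existing_buildconfigs and status in terminal]
print(orphans)
```

Only the build that is both orphaned and finished is selected; a running build is never pruned even if its build configuration was deleted.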
Developers can enable automatic build pruning by modifying their build configuration.
Images from the OpenShift image registry that are no longer required by the system due to age, status, or limits are automatically pruned. Cluster administrators can configure the Pruning Custom Resource, or suspend it.
You have access to an OpenShift Dedicated cluster using an account with dedicated-admin permissions.
Install the oc CLI.
Verify that the object named imagepruners.imageregistry.operator.openshift.io/cluster contains the following spec and status fields:
spec:
schedule: 0 0 * * * (1)
suspend: false (2)
keepTagRevisions: 3 (3)
keepYoungerThanDuration: 60m (4)
keepYoungerThan: 3600000000000 (5)
resources: {} (6)
affinity: {} (7)
nodeSelector: {} (8)
tolerations: [] (9)
successfulJobsHistoryLimit: 3 (10)
failedJobsHistoryLimit: 3 (11)
status:
observedGeneration: 2 (12)
conditions: (13)
- type: Available
status: "True"
lastTransitionTime: 2019-10-09T03:13:45
reason: Ready
message: "Periodic image pruner has been created."
- type: Scheduled
status: "True"
lastTransitionTime: 2019-10-09T03:13:45
reason: Scheduled
message: "Image pruner job has been scheduled."
- type: Failed
    status: "False"
lastTransitionTime: 2019-10-09T03:13:45
reason: Succeeded
message: "Most recent image pruning job succeeded."
1 | schedule : CronJob formatted schedule. This is an optional field, default is daily at midnight. |
2 | suspend : If set to true , the CronJob running pruning is suspended. This is an optional field, default is false . The initial value on new clusters is false . |
3 | keepTagRevisions : The number of revisions per tag to keep. This is an optional field, default is 3 . The initial value is 3 . |
4 | keepYoungerThanDuration : Retain images younger than this duration. This is an optional field. If a value is not specified, either keepYoungerThan or the default value 60m (60 minutes) is used. |
5 | keepYoungerThan : Deprecated. The same as keepYoungerThanDuration , but the duration is specified as an integer in nanoseconds. This is an optional field. When keepYoungerThanDuration is set, this field is ignored. |
6 | resources : Standard pod resource requests and limits. This is an optional field. |
7 | affinity : Standard pod affinity. This is an optional field. |
8 | nodeSelector : Standard pod node selector. This is an optional field. |
9 | tolerations : Standard pod tolerations. This is an optional field. |
10 | successfulJobsHistoryLimit : The maximum number of successful jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . |
11 | failedJobsHistoryLimit : The maximum number of failed jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . |
12 | observedGeneration : The generation observed by the Operator. |
13 | conditions : The standard condition objects with the following types:
Available: indicates if the pruning job has been created. Reasons can be Ready or Error.
Scheduled: indicates if the next pruning job has been scheduled. Reasons can be Scheduled, Suspended, or Error.
Failed: indicates if the most recent pruning job failed. |
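The deprecated keepYoungerThan field expresses the same cutoff as keepYoungerThanDuration, but as an integer number of nanoseconds. A quick check that the two values in the example spec above agree:

```python
# keepYoungerThan is in nanoseconds; keepYoungerThanDuration is a Go-style
# duration string. The example spec's two values are equivalent:
ns = 3_600_000_000_000            # keepYoungerThan from the example spec
minutes = ns // 1_000_000_000 // 60  # nanoseconds -> seconds -> minutes
print(f"{minutes}m")              # matches keepYoungerThanDuration: 60m
```

Because keepYoungerThanDuration takes precedence when both are set, the nanosecond field matters only on older configurations that never set the duration string.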
The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the ClusterOperator object for the Image Registry Operator. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning Custom Resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job:
Managed: the --prune-registry flag for the image pruner is set to true.
Removed: the --prune-registry flag for the image pruner is set to false, meaning it only prunes image metadata in etcd.
Cron jobs can perform pruning of successful jobs, but might not properly handle failed jobs. Therefore, the cluster administrator should perform regular cleanup of jobs manually. They should also restrict access to cron jobs to a small group of trusted users and set appropriate quotas to prevent the cron job from creating too many jobs and pods.