DeploymentConfigs can be managed from the OpenShift Container Platform web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated.
You can start a rollout to begin the deployment process of your application.
To start a new deployment process from an existing DeploymentConfig, run the following command:
$ oc rollout latest dc/<name>
If a deployment process is already in progress, the command displays a message and a new ReplicationController will not be deployed.
You can view a deployment to get basic information about all the available revisions of your application.
To show details about all recently created ReplicationControllers for the provided DeploymentConfig, including any currently running deployment process, run the following command:
$ oc rollout history dc/<name>
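For example, for a hypothetical DeploymentConfig named frontend with two revisions, the output resembles the following (the revision count and causes depend on your history):
$ oc rollout history dc/frontend
deploymentconfigs "frontend"
REVISION    STATUS      CAUSE
1           Complete    config change
2           Complete    config change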
To view details specific to a revision, add the --revision flag:
$ oc rollout history dc/<name> --revision=1
For more detailed information about a deployment configuration and its latest revision, use the oc describe command:
$ oc describe dc <name>
If the current revision of your DeploymentConfig failed to deploy, you can restart the deployment process.
To restart a failed deployment process:
$ oc rollout retry dc/<name>
If the latest revision of the DeploymentConfig was deployed successfully, the command displays a message and the deployment process is not retried.
Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted ReplicationController has the same configuration it had when it failed.
Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.
To roll back to the last successfully deployed revision of your configuration:
$ oc rollout undo dc/<name>
The DeploymentConfig's template is reverted to match the deployment revision specified in the undo command, and a new ReplicationController is started. If no revision is specified with --to-revision, then the last successfully deployed revision is used.
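For example, assuming a DeploymentConfig named frontend, to roll back to revision 1 explicitly:
$ oc rollout undo dc/frontend --to-revision=1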
Image change triggers on the DeploymentConfig are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete.
To re-enable the image change triggers:
$ oc set triggers dc/<name> --auto
DeploymentConfigs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy is left intact by the system and it is up to users to fix their configurations.
You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT. This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.
Add the command parameters to the spec field of the DeploymentConfig. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist).
spec:
  containers:
    - name: <container_name>
      image: 'image'
      command:
        - '<command>'
      args:
        - '<argument_1>'
        - '<argument_2>'
        - '<argument_3>'
For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments:
spec:
  containers:
    - name: example-spring-boot
      image: 'image'
      command:
        - java
      args:
        - '-jar'
        - /opt/app-root/springboots2idemo.jar
To stream the logs of the latest revision for a given DeploymentConfig:
$ oc logs -f dc/<name>
If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a Pod of your application.
You can also view logs from older failed deployment processes, if and only if these processes (old ReplicationControllers and their deployer Pods) exist and have not been pruned or deleted manually:
$ oc logs --version=1 dc/<name>
A DeploymentConfig can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster.
If no triggers are defined on a DeploymentConfig, a ConfigChange trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
The ConfigChange trigger results in a new ReplicationController whenever configuration changes are detected in the Pod template of the DeploymentConfig.
If a ConfigChange trigger is defined on a DeploymentConfig, the first ReplicationController is automatically created soon after the DeploymentConfig itself is created and it is not paused.
triggers:
- type: "ConfigChange"
The ImageChange trigger results in a new ReplicationController whenever the content of an imagestreamtag changes (when a new version of the image is pushed).
triggers:
- type: "ImageChange"
imageChangeParams:
automatic: true (1)
from:
kind: "ImageStreamTag"
name: "origin-ruby-sample:latest"
namespace: "myproject"
containerNames:
- "helloworld"
(1) If the imageChangeParams.automatic field is set to false, the trigger is disabled.
With the above example, when the latest tag value of the origin-ruby-sample imagestream changes and the new image value differs from the current image specified in the DeploymentConfig's helloworld container, a new ReplicationController is created using the new image for the helloworld container.
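You can add an equivalent trigger from the CLI with oc set triggers, reusing the imagestreamtag and container names from the snippet above (the DeploymentConfig name is a placeholder):
$ oc set triggers dc/<name> --from-image=myproject/origin-ruby-sample:latest -c helloworld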
If an ImageChange trigger is defined on a DeploymentConfig (with a ConfigChange trigger and automatic=false, or with automatic=true) and the ImageStreamTag pointed by the ImageChange trigger does not exist yet, then the initial deployment process automatically starts as soon as an image is imported or pushed by a build to the ImageStreamTag.
This resource is available only if a cluster administrator has enabled the ephemeral storage technology preview. This feature is disabled by default.
A deployment is completed by a Pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, Pods consume unbounded node resources. However, if a project specifies default container limits, then Pods consume resources up to those limits.
You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the Recreate, Rolling, or Custom deployment strategies.
In the following example, each of resources, cpu, memory, and ephemeral-storage is optional:
type: "Recreate"
resources:
limits:
cpu: "100m" (1)
memory: "256Mi" (2)
ephemeral-storage: "1Gi" (3)
(1) cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
(2) memory is in bytes: 256Mi represents 268435456 bytes (256 * 2^20).
(3) ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2^30). This applies only if your cluster administrator enabled the ephemeral storage technology preview.
However, if a quota has been defined for your project, one of the following two items is required:
A resources section set with an explicit requests:
type: "Recreate"
resources:
requests: (1)
cpu: "100m"
memory: "256Mi"
ephemeral-storage: "1Gi"
(1) The requests object contains the list of resources that correspond to the list of resources in the quota.
A limit range defined in your project, where the defaults from the LimitRange object apply to Pods created during the deployment process.
To set deployment resources, choose one of the above options. Otherwise, deploy Pod creation fails, citing a failure to satisfy quota.
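For the second option, a minimal LimitRange sketch is shown below; the object name limits and the default values are illustrative, and a cluster administrator typically creates this object:
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
    - type: Container
      default:
        cpu: "100m"
        memory: "256Mi"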
In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them.
Pods can also be autoscaled using the oc autoscale command.
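For example, a sketch that autoscales a hypothetical frontend DeploymentConfig between 1 and 10 replicas, targeting 80% average CPU utilization (the name and thresholds are illustrative):
$ oc autoscale dc/frontend --min=1 --max=10 --cpu-percent=80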
To manually scale a DeploymentConfig, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig to 3.
$ oc scale dc frontend --replicas=3
The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig frontend.
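You can verify the change with oc get; the DESIRED and CURRENT columns should both converge on 3 once the new Pods are running:
$ oc get dc frontend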
You can add a Secret to your DeploymentConfig so that it can access images from a private repository. This procedure shows the OpenShift Container Platform web console method.
Create a new project.
From the Workloads page, create a Secret that contains credentials for accessing a private image repository.
Create a DeploymentConfig.
On the DeploymentConfig editor page, set the Pull Secret and save your changes.
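If you prefer the CLI, a roughly equivalent sketch is to create a docker-registry Secret and link it to the service account used for pulls; the secret name dockerhub and all credential values below are placeholders:
$ oc create secret docker-registry dockerhub \
    --docker-server=<registry_server> \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>
$ oc secrets link default dockerhub --for=pull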
You can use node selectors in conjunction with labeled nodes to control Pod placement.
Cluster administrators can set the default node selector for a project in order to restrict Pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further.
To add a node selector when creating a pod, edit the Pod configuration and add the nodeSelector value. This can be added to a single Pod configuration or in a Pod template:
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    disktype: ssd
...
Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator.
For example, if the cluster administrator has added the type=user-node and region=east labels to a project, and you add the above disktype: ssd label to a Pod, the Pod is only ever scheduled on nodes that have all three labels.
Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default results in a Pod that will never be scheduled.
You can run a Pod with a service account other than the default.
Edit the DeploymentConfig:
$ oc edit dc/<deployment_config>
Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use:
spec:
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>