Deployment strategies provide a way for the application to evolve. Some strategies use DeploymentConfigs to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with DeploymentConfigs to impact specific routes.
The most common route-based strategy is to use a blue-green deployment. The new version (the blue version) is brought up for testing and evaluation, while the users still use the stable version (the green version). When ready, the users are switched to the blue version. If a problem arises, you can switch back to the green version.
A common alternative strategy is to use A/B versions that are both active at the same time and some users use one version, and some users use the other version. This can be used for experimenting with user interface changes and other features to get user feedback. It can also be used to verify proper operation in a production context where problems impact a limited number of users.
A canary deployment tests the new version, but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies.
The route-based deployment strategies do not scale the number of Pods in the services. To maintain the desired performance characteristics, the deployment configurations might have to be scaled.
In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard, which forwards or splits the traffic it receives to a separate service or application running elsewhere.
In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send to both a separate cluster as well as to a local instance of the application, and compare the result. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes.
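As a sketch of the simplest forwarding case (the service name and ports are hypothetical, and this assumes socat is available in the proxy image), the proxy shard's container could run a plain TCP forwarder:

```shell
# Forward every TCP connection on port 8080, unchanged, to a service
# running elsewhere (hypothetical remote service address).
$ socat TCP-LISTEN:8080,fork,reuseaddr TCP:example-service.remote-project.svc:8080
```

More elaborate setups, such as request duplication or sampling, require a purpose-built proxy rather than a simple forwarder.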
Any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities.
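For example (the DeploymentConfig names are hypothetical), a roughly 90/10 traffic split across ten total replicas behind the same service can be approximated with relative scale:

```shell
$ oc scale dc/example-shard-a --replicas=9    # ~90% of traffic
$ oc scale dc/example-shard-b --replicas=1    # ~10% of traffic
```

This relies on the router balancing endpoints evenly, so the split is approximate rather than exact.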
Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem.
This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user’s browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it.
For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional.
One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment.
OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit.
On shutdown, OpenShift Container Platform sends a TERM signal to the processes in the container. On receiving SIGTERM, application code should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code should then wait until all open connections are closed (or gracefully terminate individual connections at the next opportunity) before exiting.
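The shutdown sequence can be sketched in a minimal shell entrypoint (an illustration only; the drain logic is a placeholder for application-specific connection handling):

```shell
#!/bin/sh
# On SIGTERM: stop accepting new connections, drain, then exit.
drain() {
  echo "SIGTERM received: draining open connections"
}
trap drain TERM
kill -TERM $$        # simulate the signal sent on Pod shutdown
echo "drained: exiting cleanly"
```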
After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a Pod or Pod template controls the graceful termination period (default 30 seconds) and may be customized per application as necessary.
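For example, a Pod template that extends the grace period to 120 seconds (the value and names are illustrative) sets the attribute alongside the container spec:

```yaml
spec:
  terminationGracePeriodSeconds: 120
  containers:
  - name: example
    image: openshift/deployment-example:v1
```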
Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the green version) to the newer version (the blue version). You can use a Rolling strategy or switch services in a route.
Because many applications depend on persistent data, you must have an application that supports N-1 compatibility, which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer.
Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version.
Blue-green deployments use two DeploymentConfigs. Both are running, and the one in production depends on which service the route specifies, with each DeploymentConfig exposed to a different service.
Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications.
You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (blue) version is live.
If necessary, you can roll back to the older (green) version by switching the service back to the previous version.
Create two copies of the example application:
$ oc new-app openshift/deployment-example:v1 --name=example-green
$ oc new-app openshift/deployment-example:v2 --name=example-blue
This creates two independent application components: one running the v1 image under the example-green service, and one using the v2 image under the example-blue service.
Create a route that points to the old service:
$ oc expose svc/example-green --name=bluegreen-example
Browse to the application at bluegreen-example-<project>.<router_domain> to verify you see the v1 image.
Edit the route and change the service name to example-blue:
$ oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-blue"}}}'
To verify that the route has changed, refresh the browser until you see the v2 image.
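If a problem surfaces in the blue version, the same patch mechanism switches the route back to the green service:

```shell
$ oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-green"}}}'
```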
The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version.
Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of Pods in each service might have to be scaled as well to provide the expected performance.
In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user’s reaction to the different versions to inform design decisions.
For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together.
OpenShift Container Platform supports N-1 compatibility through the web console as well as the CLI.
The user sets up a route with multiple services. Each service handles a version of the application.
Each service is assigned a weight, and the portion of requests to each service is the service_weight divided by the sum_of_weights. The weight for each service is distributed to the service's endpoints so that the sum of the endpoint weights is the service weight.
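For instance, using the example weights that appear later in this section, the traffic split follows directly from that formula:

```shell
# Portion of requests = service_weight / sum_of_weights.
# Weights of 198 and 2 across two services give a 99%/1% split.
weight_a=198
weight_b=2
sum=$((weight_a + weight_b))
pct_a=$((100 * weight_a / sum))
pct_b=$((100 * weight_b / sum))
echo "ab-example-a: ${pct_a}%  ab-example-b: ${pct_b}%"
```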
The route can have up to four services. The weight for the service can be between 0 and 256. When the weight is 0, the service does not participate in load-balancing but continues to serve existing persistent connections. When the service weight is not 0, each endpoint has a minimum weight of 1. Because of this, a service with a lot of endpoints can end up with a higher weight than desired. In this case, reduce the number of Pods to get the desired load balance weight.
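To illustrate how the minimum endpoint weight inflates a small service weight (a sketch with hypothetical numbers): a service assigned a weight of 2 but backed by ten endpoints cannot distribute less than 1 per endpoint, so its effective weight becomes 10:

```shell
service_weight=2
endpoints=10
per_endpoint=$((service_weight / endpoints))   # integer division yields 0
[ "$per_endpoint" -lt 1 ] && per_endpoint=1    # router enforces a minimum of 1
effective=$((per_endpoint * endpoints))
echo "requested ${service_weight}, effective ${effective}"
```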
To set up the A/B environment:
Create the two applications and give them different names. Each creates a DeploymentConfig. The applications are versions of the same program; one is usually the current production version and the other the proposed new version:
$ oc new-app openshift/deployment-example --name=ab-example-a
$ oc new-app openshift/deployment-example --name=ab-example-b
Both applications are deployed and services are created.
Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version.
$ oc expose svc/ab-example-a
Browse to the application at ab-example-a-<project>.<router_domain> to verify that you see the desired version.
When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1, so all requests go to it. Adding the other service as alternateBackends and adjusting the weights brings the A/B setup to life. This can be done with the oc set route-backends command or by editing the route.
Setting a service's weight to 0 with oc set route-backends means the service does not participate in load-balancing, but continues to serve existing persistent connections.
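For example, to take the ab-example-b backend out of rotation while keeping its persistent connections alive:

```shell
$ oc set route-backends ab-example ab-example-b=0
```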
Changes to the route just change the portion of traffic to the various services. You might have to scale the DeploymentConfigs to adjust the number of Pods to handle the anticipated loads.
To edit the route, run:
$ oc edit route <route_name>
...
metadata:
  name: route-alternate-service
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
spec:
  host: ab-example.my-project.my-domain
  to:
    kind: Service
    name: ab-example-a
    weight: 10
  alternateBackends:
  - kind: Service
    name: ab-example-b
    weight: 15
...
Navigate to the Route details page (Applications/Routes).
Select Edit from the Actions menu.
Check Split traffic across multiple services.
The Service Weights slider sets the percentage of traffic sent to each service.
For traffic split between more than two services, the relative weights are specified by integers between 0 and 256 for each service.
Traffic weightings are shown on the Overview in the expanded rows of the applications between which traffic is split.
To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command:
$ oc set route-backends ROUTENAME \
    [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]
For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with weight=2:
$ oc set route-backends ab-example ab-example-a=198 ab-example-b=2
This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b.
This command does not scale the DeploymentConfigs. You might be required to do so to have enough Pods to handle the request load.
Run the command with no flags to verify the current configuration:
$ oc set route-backends ab-example
NAME               KIND     TO            WEIGHT
routes/ab-example  Service  ab-example-a  198 (99%)
routes/ab-example  Service  ab-example-b  2   (1%)
To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed one.
For example:
$ oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10
$ oc set route-backends ab-example --adjust ab-example-b=5%
$ oc set route-backends ab-example --adjust ab-example-b=+15%
The --equal flag sets the weight of all services to 100:
$ oc set route-backends ab-example --equal
The --zero flag sets the weight of all services to 0. All requests then return with a 503 error.
Not all routers may support multiple or weighted backends.
Create a new application, adding a label ab-example=true that will be common to all shards:
$ oc new-app openshift/deployment-example --name=ab-example-a \
    --labels=ab-example=true SUBTITLE="shard A" COLOR="blue"
The application is deployed and a service is created. This is the first shard.
Make the application available via a route (or use the service IP directly):
$ oc expose svc/ab-example-a --name=ab-example
Browse to the application at ab-example-<project>.<router_domain> to verify you see the v1 image.
Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables:
$ oc new-app openshift/deployment-example:v2 \
    --name=ab-example-b --labels=ab-example=true \
    SUBTITLE="shard B" COLOR="red"
At this point, both sets of Pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you.
To force your browser to one or the other shard:
Use the oc scale command to reduce replicas of ab-example-a to 0.
$ oc scale dc/ab-example-a --replicas=0
Refresh your browser to show v2 and shard B (in red).
Scale ab-example-a to 1 replica and ab-example-b to 0:
$ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0
Refresh your browser to show v1 and shard A (in blue).
If you trigger a deployment on either shard, only the Pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either DeploymentConfig:
$ oc edit dc/ab-example-a
or
$ oc edit dc/ab-example-b
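Equivalently (the new subtitle value is arbitrary), oc set env changes the variable non-interactively, which also triggers a deployment on that shard only:

```shell
$ oc set env dc/ab-example-b SUBTITLE="shard B updated"
```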