Following an OpenShift Container Platform upgrade, it may be desirable in extreme cases to downgrade your cluster to a previous version. The following sections outline the required steps for each system in a cluster to perform such a downgrade for the OpenShift Container Platform 3.6 to 3.5 downgrade path.
These steps are currently only supported for RPM-based installations of OpenShift Container Platform and assume downtime of the entire cluster.
The Ansible playbook used during the upgrade process should have created a backup of the master-config.yaml file and the etcd data directory. Ensure these exist on your masters and etcd members:
/etc/origin/master/master-config.yaml.<timestamp>
/var/lib/etcd/openshift-backup-pre-upgrade-<timestamp>
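For example, a quick way to confirm these backups are present on each master (and etcd member) is to list them; the exact file names depend on the timestamp the playbook used:

# ls /etc/origin/master/master-config.yaml.*
# ls -d /var/lib/etcd/openshift-backup-pre-upgrade-*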
Also, back up the node-config.yaml file on each node (including masters, which have the node component on them) with a timestamp:
/etc/origin/node/node-config.yaml.<timestamp>
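One way to create such a backup on each node is a simple copy with a timestamp suffix; the suffix format below is only an illustration:

# cp -a /etc/origin/node/node-config.yaml \
    /etc/origin/node/node-config.yaml.$(date +%Y%m%d%H%M%S)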
If you use a separate etcd cluster instead of a single embedded etcd instance, the backup is likely created on all etcd members, though only one is required for the recovery process. You can run a separate etcd instance that is co-located with your master nodes.
The RPM downgrade process in a later step should create .rpmsave backups of the following files, but it may be a good idea to keep a separate copy regardless:
/etc/sysconfig/atomic-openshift-master
/etc/sysconfig/atomic-openshift-master-api
/etc/sysconfig/atomic-openshift-master-controllers
/etc/etcd/etcd.conf (1)

(1) Only required if using a separate etcd cluster.
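For example, the following copy commands show one way to keep separate backups of these files; the .backup suffix is arbitrary, and the etcd.conf copy is only needed when using a separate etcd cluster:

# cp -a /etc/sysconfig/atomic-openshift-master{,.backup}
# cp -a /etc/sysconfig/atomic-openshift-master-api{,.backup}
# cp -a /etc/sysconfig/atomic-openshift-master-controllers{,.backup}
# cp -a /etc/etcd/etcd.conf{,.backup}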
On all masters, nodes, and etcd members (if you use a separate etcd cluster running on dedicated hosts), ensure the relevant services are stopped. On each master:
# systemctl stop atomic-openshift-master-api atomic-openshift-master-controllers
On all master and node hosts:
# systemctl stop atomic-openshift-node
On any etcd hosts for a separate etcd cluster:
# systemctl stop etcd
The *-excluder packages add entries to the exclude directive in the host’s /etc/yum.conf file when installed. Run the following commands on each host to remove the atomic-openshift-* and docker packages from the exclude list:
# atomic-openshift-excluder unexclude
# atomic-openshift-docker-excluder unexclude
On all masters, nodes, and etcd members (if you use a separate etcd cluster running on dedicated hosts), remove the following packages:
# yum remove atomic-openshift \
    atomic-openshift-clients \
    atomic-openshift-node \
    atomic-openshift-master-api \
    atomic-openshift-master-controllers \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
If you use a separate etcd cluster, also remove the etcd package:
# yum remove etcd
If using the embedded etcd, leave the etcd package installed. It is required for running the etcdctl command to issue operations in later steps.
Both OpenShift Container Platform 3.5 and 3.6 require Docker 1.12, so Docker does not need to be downgraded.
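If you want to confirm the currently installed Docker version before proceeding, a quick check such as the following can be used:

# rpm -q docker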
Disable the OpenShift Container Platform 3.6 repositories, and re-enable the 3.5 repositories:
# subscription-manager repos \
    --disable=rhel-7-server-ose-3.6-rpms \
    --enable=rhel-7-server-ose-3.5-rpms
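To confirm the repository change took effect, you can list the enabled repositories; the grep pattern below is just an illustration:

# yum repolist enabled | grep ose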
On each master, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-clients \
    atomic-openshift-node \
    atomic-openshift-master-api \
    atomic-openshift-master-controllers \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
On each node, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-node \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
If you use a separate etcd cluster, install the following package on each etcd member:
# yum install etcd
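After the installations complete, you may want to confirm on each host that the expected OpenShift Container Platform 3.5 package versions are now installed (and, on separate etcd hosts, the etcd package); one way to check is:

# rpm -qa | grep atomic-openshift
# rpm -q etcd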
See Backup and Restore.
To verify the downgrade, first check that all nodes are marked as Ready:
# oc get nodes
NAME                 STATUS                     AGE
master.example.com   Ready,SchedulingDisabled   165d
node1.example.com    Ready                      165d
node2.example.com    Ready                      165d
Then, verify that you are running the expected versions of the docker-registry and router images, if deployed:
# oc get -n default dc/docker-registry -o json | grep \"image\"
    "image": "openshift3/ose-docker-registry:v3.5.5.31",

# oc get -n default dc/router -o json | grep \"image\"
    "image": "openshift3/ose-haproxy-router:v3.5.5.31",
You can use the diagnostics tool on the master to look for common issues and provide suggestions:
# oc adm diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.