Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
Follow this guide to create an Azure Red Hat OpenShift 4 cluster. If you have specific questions, please contact us.
This section summarizes how Azure Red Hat OpenShift-specific processes work. These descriptions do not cover all edge cases and are limited to presenting the expected flow.
A new release might contain new virtual machine (VM) images or container images. The update runs in the background without customer involvement or any interruption to customer applications running on the cluster. The Azure Red Hat OpenShift resource provider completes all actions. Whether a component, such as a node or Pod, needs updating is decided by comparing the component's current hash with the desired hash value.
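The hash comparison above can be sketched as follows. This is an illustrative model only, assuming a SHA-256 hash over a component's rendered configuration; the function names and configuration strings are hypothetical, not the ARO resource provider's internals.

```python
# Hypothetical sketch of the hash-comparison decision described above.
# Component configurations and hash values are illustrative only.
import hashlib


def config_hash(config: str) -> str:
    """Hash a component's rendered configuration (assumed SHA-256 here)."""
    return hashlib.sha256(config.encode()).hexdigest()


def needs_update(current_hash: str, desired_hash: str) -> bool:
    """A component is updated only when its hash differs from the desired one."""
    return current_hash != desired_hash


# A changed VM image changes the hash, so the component is flagged for update.
current = config_hash("vm-image: v3.11.154")
desired = config_hash("vm-image: v3.11.161")
print(needs_update(current, desired))  # True
```

A component whose configuration is unchanged hashes to the same value, so it is left alone.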
After the resource provider switches the cluster into the appropriate provisioning state, it performs a backup of etcd. The process continues by generating all required configuration and template files and placing them in the appropriate storage accounts.
The master nodes update first. The resource provider checks whether the current hash of master-000000 matches the desired hash value. If the values match, the master remains unchanged. If they do not, the resource provider drains and stops the master, updates it to the desired VM, and restarts it.
The process repeats with the other two masters in order, so that only a single master stops at a time.
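The serial master update above can be sketched as a loop. The node names and the drain/stop/update/restart steps come from the text; the helper function and the hash values are hypothetical placeholders, not the actual resource provider API.

```python
# Illustrative sketch of the serial master update: each master is compared
# against the desired hash, and at most one master is down at a time.


def update_master(name: str, current: str, desired: str, log: list) -> None:
    if current == desired:
        log.append(f"{name}: unchanged")  # hashes match, nothing to do
        return
    # Only this one master is offline; the other two keep serving.
    log.append(f"{name}: drain")
    log.append(f"{name}: stop")
    log.append(f"{name}: update to desired VM")
    log.append(f"{name}: restart")


log: list = []
# Hypothetical current hashes: one master already matches the desired value.
masters = {"master-000000": "old", "master-000001": "new", "master-000002": "old"}
for name, current in masters.items():  # one master at a time, in order
    update_master(name, current, "new", log)
```

Because the loop finishes one master before starting the next, the cluster never loses more than one master during the update.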
Next, the infra nodes update. In the normal update flow, the resource provider creates a new ss-infra-* scale set with the new VM image. The old ss-infra-* scale set remains at N=3 instances.
During the update process, the new scale set scales up by one instance. After verifying that the new VM started properly and is available, the resource provider scales the old scale set down by one instance. Before an old instance is removed, all Pods running on it are drained.
This cycle of adding, draining, and removing repeats until the old scale set has no instances left. The old scale set is then removed.
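The add-one, drain-one cycle above can be sketched as a small loop. This is a minimal model assuming three infra instances, as the text states; the scale-set names and log messages are illustrative.

```python
# Minimal sketch of the rolling scale-set replacement described above:
# grow the new scale set by one, verify, drain and shrink the old one by one,
# and delete the old scale set once it is empty.


def rolling_replace(old_count: int, log: list) -> None:
    new_count = 0
    while old_count > 0:
        new_count += 1  # extend the new scale set by one instance
        log.append(f"ss-infra-new scaled up to {new_count}")
        log.append("verify new instance is available")
        log.append("drain Pods from one old instance")
        old_count -= 1  # then remove the drained old instance
        log.append(f"ss-infra-old scaled down to {old_count}")
    log.append("remove empty ss-infra-old scale set")


log: list = []
rolling_replace(3, log)  # infra scale sets run N=3 instances
```

At every point in the cycle the cluster has at least the original instance count available, since a new instance is verified before an old one is drained.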
The compute nodes update using the same process as the infra nodes, the only difference being the node type and instance count.