This document describes the process to restart your cluster after a graceful shutdown.
Even though the cluster is expected to be functional after the restart, the cluster might not recover due to unexpected conditions, for example:
etcd data corruption during shutdown
Node failure due to hardware
Network connectivity issues
If your cluster fails to recover, follow the steps to restore to a previous cluster state.
You can restart your cluster after it has been shut down gracefully.
You have gracefully shut down your cluster.
You have access to the cluster as a user with the cluster-admin role.
This procedure assumes that you gracefully shut down the cluster.
Power on any cluster dependencies, such as external storage or an LDAP server.
Start all cluster machines.
Use the appropriate method for your cloud environment to start the machines, for example, from your cloud provider’s web console.
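For example, if your cluster runs on AWS, you can start the stopped machines with the AWS CLI instead of the web console. The following sketch assumes that the instances are tagged with the cluster infrastructure ID, shown here as a hypothetical <infra_id> placeholder:
$ aws ec2 describe-instances \
    --filters "Name=tag:kubernetes.io/cluster/<infra_id>,Values=owned" \
              "Name=instance-state-name,Values=stopped" \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text \
  | xargs --no-run-if-empty aws ec2 start-instances --instance-ids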
Wait approximately 10 minutes before continuing to check the status of control plane nodes.
Verify that all control plane nodes are ready.
$ oc get nodes -l node-role.kubernetes.io/master
The control plane nodes are ready if the status is Ready, as shown in the following output:
NAME                           STATUS   ROLES                  AGE   VERSION
ip-10-0-168-251.ec2.internal   Ready    control-plane,master   75m   v1.29.4
ip-10-0-170-223.ec2.internal   Ready    control-plane,master   75m   v1.29.4
ip-10-0-211-16.ec2.internal    Ready    control-plane,master   75m   v1.29.4
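Optionally, instead of polling, you can block until every control plane node reports the Ready condition. The timeout in the following example is an arbitrary value; adjust it for your environment:
$ oc wait node --selector node-role.kubernetes.io/master --for=condition=Ready --timeout=15m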
If the control plane nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
Get the list of current CSRs:
$ oc get csr
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> (1)
1 <csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid CSR:
$ oc adm certificate approve <csr_name>
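If many valid CSRs are pending, you can approve them in one pass. The following sketch lists pending CSRs, which have no status set, and approves them; review the list first, because approving an unexpected CSR is a security risk:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve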
After the control plane nodes are ready, verify that all worker nodes are ready.
$ oc get nodes -l node-role.kubernetes.io/worker
The worker nodes are ready if the status is Ready, as shown in the following output:
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-179-95.ec2.internal    Ready    worker   64m   v1.29.4
ip-10-0-182-134.ec2.internal   Ready    worker   64m   v1.29.4
ip-10-0-250-100.ec2.internal   Ready    worker   64m   v1.29.4
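On larger clusters it can be easier to list only the worker nodes that are not yet ready. A filter such as the following sketch prints the name and status of any worker node whose status is not Ready:
$ oc get nodes -l node-role.kubernetes.io/worker --no-headers | awk '$2 != "Ready" {print $1, $2}'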
If the worker nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
Get the list of current CSRs:
$ oc get csr
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> (1)
1 <csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid CSR:
$ oc adm certificate approve <csr_name>
After the control plane and worker nodes are ready, mark all the nodes in the cluster as schedulable. Run the following command:
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo ${node} ; oc adm uncordon ${node} ; done
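To confirm that the loop left no nodes cordoned, you can check that no node reports SchedulingDisabled, for example:
$ oc get nodes --no-headers | grep SchedulingDisabled || echo "All nodes are schedulable"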
Verify that the cluster started properly.
Check that there are no degraded cluster Operators.
$ oc get clusteroperators
Check that there are no cluster Operators with the DEGRADED condition set to True.
NAME                      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication            4.16.0    True        False         False      59m
cloud-credential          4.16.0    True        False         False      85m
cluster-autoscaler        4.16.0    True        False         False      73m
config-operator           4.16.0    True        False         False      73m
console                   4.16.0    True        False         False      62m
csi-snapshot-controller   4.16.0    True        False         False      66m
dns                       4.16.0    True        False         False      76m
etcd                      4.16.0    True        False         False      76m
...
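To list only the cluster Operators that report a Degraded condition, rather than scanning the full table, you can use a filter such as the following sketch, assuming that jq is installed on the host. Empty output means that no Operators are degraded:
$ oc get clusteroperators -o json | jq -r '.items[] | select(.status.conditions[] | select(.type == "Degraded" and .status == "True")) | .metadata.name'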
Check that all nodes are in the Ready state:
$ oc get nodes
Check that the status for all nodes is Ready.
NAME                           STATUS   ROLES                  AGE   VERSION
ip-10-0-168-251.ec2.internal   Ready    control-plane,master   82m   v1.29.4
ip-10-0-170-223.ec2.internal   Ready    control-plane,master   82m   v1.29.4
ip-10-0-179-95.ec2.internal    Ready    worker                 70m   v1.29.4
ip-10-0-182-134.ec2.internal   Ready    worker                 70m   v1.29.4
ip-10-0-211-16.ec2.internal    Ready    control-plane,master   82m   v1.29.4
ip-10-0-250-100.ec2.internal   Ready    worker                 69m   v1.29.4
If the cluster did not start properly, you might need to restore your cluster using an etcd backup.
See Restoring to a previous cluster state for information about how to use an etcd backup to restore your cluster if it fails to recover after restarting.