After deploying an installer-provisioned OpenShift Container Platform cluster, you can use the following procedures to expand the number of worker nodes. Ensure that each prospective worker node meets the prerequisites.
Expanding the cluster using Redfish Virtual Media involves meeting minimum firmware requirements. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details when expanding the cluster using Redfish Virtual Media.
Expanding the cluster requires a DHCP server. Each node must have a DHCP reservation.
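For example, if the DHCP server is dnsmasq, a reservation for a prospective worker node can look like the following. This is a sketch only; dnsmasq is an assumption, and the MAC address, IP address, and host name are placeholders for your environment.
# /etc/dnsmasq.conf
# Reserve a fixed address for the prospective worker node.
dhcp-host=52:54:00:aa:bb:cc,openshift-worker-2,192.168.111.50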
Reserving IP addresses so they become static IP addresses
Some administrators prefer to use static IP addresses so that each node’s IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses in the DHCP server with an infinite lease. After the installer provisions the node successfully, the dispatcher script will check the node’s network configuration. If the dispatcher script finds that the network configuration contains a DHCP infinite lease, it will recreate the connection as a static IP connection using the IP address from the DHCP infinite lease. NICs without DHCP infinite leases will remain unmodified. Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator.
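With dnsmasq, for example, an infinite lease can be expressed directly in the reservation. Again a sketch with placeholder values:
# /etc/dnsmasq.conf
# The "infinite" lease time causes the dispatcher script to recreate the
# connection as a static IP connection after provisioning.
dhcp-host=52:54:00:aa:bb:cc,openshift-worker-2,192.168.111.50,infinite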
Preparing the bare metal node requires executing the following procedure from the provisioner node.
Get the oc binary, if needed. It should already exist on the provisioner node.
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux-$VERSION.tar.gz | tar zxvf - oc
$ sudo cp oc /usr/local/bin
Power off the bare metal node by using the baseboard management controller, and ensure it is off.
Retrieve the user name and password of the bare metal node’s baseboard management controller. Then, create base64 strings from the user name and password:
$ echo -ne "root" | base64
$ echo -ne "password" | base64
Create a configuration file for the bare metal node.
$ vim bmh.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: openshift-worker-<num>-bmc-secret
type: Opaque
data:
  username: <base64-of-uid>
  password: <base64-of-pwd>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: openshift-worker-<num>
spec:
  online: true
  bootMACAddress: <NIC1-mac-address>
  bmc:
    address: <protocol>://<bmc-ip>
    credentialsName: openshift-worker-<num>-bmc-secret
Replace <num> with the worker number of the bare metal node in the two name fields and the credentialsName field. Replace <base64-of-uid> with the base64 string of the user name. Replace <base64-of-pwd> with the base64 string of the password. Replace <NIC1-mac-address> with the MAC address of the bare metal node’s first NIC. Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others, and replace <bmc-ip> with the IP address of the bare metal node’s baseboard management controller. See the BMC addressing section for additional BMC configuration options.
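For illustration, address values commonly take forms such as the following. The IP address and Redfish system path are placeholders, and the exact format depends on your hardware; see the BMC addressing section:
ipmi://192.168.111.30
redfish://192.168.111.30/redfish/v1/Systems/1
redfish-virtualmedia://192.168.111.30/redfish/v1/Systems/1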
If the MAC address of an existing bare metal node matches the MAC address of a bare metal host that you are attempting to provision, then the Ironic installation will fail. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. See Diagnosing a host duplicate MAC address for more information.
Create the bare metal node.
$ oc -n openshift-machine-api create -f bmh.yaml
secret/openshift-worker-<num>-bmc-secret created
baremetalhost.metal3.io/openshift-worker-<num> created
Where <num> is the worker number.
Power up and inspect the bare metal node.
$ oc -n openshift-machine-api get bmh openshift-worker-<num>
Where <num> is the worker node number.
NAME STATE CONSUMER ONLINE ERROR
openshift-worker-<num> ready true
Use the following procedure to replace an installer-provisioned OpenShift Container Platform control plane node.
You have access to the cluster as a user with the cluster-admin role.
You have taken an etcd backup.
Take an etcd backup before performing this procedure so that you can restore your cluster if you encounter any issues. For more information about taking an etcd backup, see the Additional resources section.
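As a minimal sketch, the documented approach runs the cluster backup script on a healthy control plane node through a debug shell. The node name and backup directory below are placeholders; see Backing up etcd for the full procedure:
$ oc debug node/<healthy_control_plane_node> -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup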
Ensure that the Bare Metal Operator is available:
$ oc get clusteroperator baremetal
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
baremetal 4.9.0 True False False 3d15h
Remove the old BareMetalHost and Machine objects:
$ oc delete bmh -n openshift-machine-api <host_name>
$ oc delete machine -n openshift-machine-api <machine_name>
Replace <host_name> with the name of the host and <machine_name> with the name of the machine. The machine name appears under the CONSUMER field.
After you remove the BareMetalHost and Machine objects, the machine controller automatically deletes the Node object.
Create the new BareMetalHost object and the secret to store the BMC credentials:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: control-plane-<num>-bmc-secret (1)
  namespace: openshift-machine-api
data:
  username: <base64_of_uid> (2)
  password: <base64_of_pwd> (3)
type: Opaque
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: control-plane-<num> (1)
  namespace: openshift-machine-api
spec:
  automatedCleaningMode: disabled
  bmc:
    address: <protocol>://<bmc_ip> (4)
    credentialsName: control-plane-<num>-bmc-secret (1)
  bootMACAddress: <NIC1_mac_address> (5)
  bootMode: UEFI
  externallyProvisioned: false
  hardwareProfile: unknown
  online: true
EOF
(1) Replace <num> with the control plane number of the bare metal node in the name fields and the credentialsName field.
(2) Replace <base64_of_uid> with the base64 string of the user name.
(3) Replace <base64_of_pwd> with the base64 string of the password.
(4) Replace <protocol> with the BMC protocol, such as redfish, redfish-virtualmedia, idrac-virtualmedia, or others. Replace <bmc_ip> with the IP address of the bare metal node’s baseboard management controller. For additional BMC configuration options, see "BMC addressing" in the Additional resources section.
(5) Replace <NIC1_mac_address> with the MAC address of the bare metal node’s first NIC.
After the inspection is complete, the BareMetalHost object is created and available to be provisioned.
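Optionally, watch the host transition through the registering and inspecting states until it reports available. This convenience command is not part of the original procedure:
$ oc get bmh -n openshift-machine-api -w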
View available BareMetalHost
objects:
$ oc get bmh -n openshift-machine-api
NAME STATE CONSUMER ONLINE ERROR AGE
control-plane-1.example.com available control-plane-1 true 1h10m
control-plane-2.example.com externally provisioned control-plane-2 true 4h53m
control-plane-3.example.com externally provisioned control-plane-3 true 4h53m
compute-1.example.com available compute-1-ktmmx true 4h53m
compute-2.example.com provisioned compute-2-l2zmb true 4h53m
There are no MachineSet objects for control plane nodes, so you must create a Machine object instead. You can copy the providerSpec from another control plane Machine object.
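For example, a sketch of dumping the providerSpec of an existing control plane Machine object to use as a starting point; the machine name is a placeholder:
$ oc -n openshift-machine-api get machine <existing_control_plane_machine> -o jsonpath='{.spec.providerSpec.value}'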
Create a Machine object:
$ cat <<EOF | oc apply -f -
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  annotations:
    metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> (1)
  labels:
    machine.openshift.io/cluster-api-cluster: control-plane-<num> (1)
    machine.openshift.io/cluster-api-machine-role: master
    machine.openshift.io/cluster-api-machine-type: master
  name: control-plane-<num> (1)
  namespace: openshift-machine-api
spec:
  metadata: {}
  providerSpec:
    value:
      apiVersion: baremetal.cluster.k8s.io/v1alpha1
      customDeploy:
        method: install_coreos
      hostSelector: {}
      image:
        checksum: ""
        url: ""
      kind: BareMetalMachineProviderSpec
      metadata:
        creationTimestamp: null
      userData:
        name: master-user-data-managed
EOF
(1) Replace <num> with the control plane number of the bare metal node in the name, labels, and annotations fields.
To view the BareMetalHost objects, run the following command:
$ oc get bmh -A
NAME STATE CONSUMER ONLINE ERROR AGE
control-plane-1.example.com provisioned control-plane-1 true 2h53m
control-plane-2.example.com externally provisioned control-plane-2 true 5h53m
control-plane-3.example.com externally provisioned control-plane-3 true 5h53m
compute-1.example.com provisioned compute-1-ktmmx true 5h53m
compute-2.example.com provisioned compute-2-l2zmb true 5h53m
After the RHCOS installation, verify that the BareMetalHost is added to the cluster:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
control-plane-1.example.com   Ready     master    4m2s   v1.18.2
control-plane-2.example.com   Ready     master    141m   v1.18.2
control-plane-3.example.com   Ready     master    141m   v1.18.2
compute-1.example.com         Ready     worker    87m    v1.18.2
compute-2.example.com         Ready     worker    87m    v1.18.2
After replacement of the control plane node, the etcd pod that runs on the new node might be in a crash loop until the etcd Operator reconciles the new member.
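To monitor the etcd pods while the cluster settles, you can list them directly. This convenience command is not part of the original procedure:
$ oc -n openshift-etcd get pods -l app=etcd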
If the provisioning network is enabled and you want to expand the cluster using Virtual Media on the baremetal network, use the following procedure.
There is an existing cluster with a baremetal network and a provisioning network.
Edit the provisioning custom resource (CR) to enable deploying with Virtual Media on the baremetal network:
$ oc edit provisioning
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  creationTimestamp: "2021-08-05T18:51:50Z"
  finalizers:
  - provisioning.metal3.io
  generation: 8
  name: provisioning-configuration
  resourceVersion: "551591"
  uid: f76e956f-24c6-4361-aa5b-feaf72c5b526
spec:
  preProvisioningOSDownloadURLs: {}
  provisioningDHCPRange: 172.22.0.10,172.22.0.254
  provisioningIP: 172.22.0.3
  provisioningInterface: enp1s0
  provisioningNetwork: Managed
  provisioningNetworkCIDR: 172.22.0.0/24
  provisioningOSDownloadURL: http://192.168.111.1/images/rhcos-<version>.x86_64.qcow2.gz?sha256=<sha256>
  virtualMediaViaExternalNetwork: true (1)
status:
  generations:
  - group: apps
    hash: ""
    lastGeneration: 7
    name: metal3
    namespace: openshift-machine-api
    resource: deployments
  - group: apps
    hash: ""
    lastGeneration: 1
    name: metal3-image-cache
    namespace: openshift-machine-api
    resource: daemonsets
  observedGeneration: 8
  readyReplicas: 0
(1) Add virtualMediaViaExternalNetwork: true to the provisioning CR.
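To confirm the change took effect, you can read the field back. This verification step is illustrative, not part of the original procedure:
$ oc get provisioning provisioning-configuration -o jsonpath='{.spec.virtualMediaViaExternalNetwork}'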
Edit the machine set to use the API VIP address:
$ oc edit machineset
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: "2021-08-05T18:51:52Z"
  generation: 11
  labels:
    machine.openshift.io/cluster-api-cluster: ostest-hwmdt
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
  name: ostest-hwmdt-worker-0
  namespace: openshift-machine-api
  resourceVersion: "551513"
  uid: fad1c6e0-b9da-4d4a-8d73-286f78788931
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: ostest-hwmdt
      machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: ostest-hwmdt
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0
    spec:
      metadata: {}
      providerSpec:
        value:
          apiVersion: baremetal.cluster.k8s.io/v1alpha1
          hostSelector: {}
          image:
            checksum: http://172.22.0.3:6181/images/rhcos-<version>.x86_64.qcow2.<md5sum> (1)
            url: http://172.22.0.3:6181/images/rhcos-<version>.x86_64.qcow2 (2)
          kind: BareMetalMachineProviderSpec
          metadata:
            creationTimestamp: null
          userData:
            name: worker-user-data
status:
  availableReplicas: 2
  fullyLabeledReplicas: 2
  observedGeneration: 11
  readyReplicas: 2
  replicas: 2
(1) Edit the checksum URL to use the API VIP address.
(2) Edit the url value to use the API VIP address.
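To confirm the edit, you can read the image URL back; the machine set name comes from the example above:
$ oc -n openshift-machine-api get machineset ostest-hwmdt-worker-0 -o jsonpath='{.spec.template.spec.providerSpec.value.image.url}'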
If the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the Bare Metal Operator associates the host with the existing node. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. A registration error is displayed for the failed bare-metal host.
You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace.
Install an OpenShift Container Platform cluster on bare metal.
Install the OpenShift Container Platform CLI oc.
Log in as a user with cluster-admin privileges.
To determine whether a bare-metal host that fails provisioning has the same MAC address as an existing node, do the following:
Get the bare-metal hosts running in the openshift-machine-api namespace:
$ oc get bmh -n openshift-machine-api
NAME STATUS PROVISIONING STATUS CONSUMER
openshift-master-0 OK externally provisioned openshift-zpwpq-master-0
openshift-master-1 OK externally provisioned openshift-zpwpq-master-1
openshift-master-2 OK externally provisioned openshift-zpwpq-master-2
openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n
openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm
openshift-worker-2 error registering
To see more detailed information about the status of the failing host, run the following command, replacing <bare_metal_host_name> with the name of the host:
$ oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml
...
status:
errorCount: 12
errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1
errorType: registration error
...
Provisioning the bare metal node requires executing the following procedure from the provisioner node.
Ensure the STATE is ready before provisioning the bare metal node.
$ oc -n openshift-machine-api get bmh openshift-worker-<num>
Where <num> is the worker node number.
NAME STATE CONSUMER ONLINE ERROR
openshift-worker-<num> ready true
Get a count of the number of worker nodes.
$ oc get nodes
NAME STATUS ROLES AGE VERSION
provisioner.openshift.example.com Ready master 30h v1.22.1
openshift-master-1.openshift.example.com Ready master 30h v1.22.1
openshift-master-2.openshift.example.com Ready master 30h v1.22.1
openshift-master-3.openshift.example.com Ready master 30h v1.22.1
openshift-worker-0.openshift.example.com Ready worker 30h v1.22.1
openshift-worker-1.openshift.example.com Ready worker 30h v1.22.1
Get the machine set.
$ oc get machinesets -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
...
openshift-worker-0.example.com 1 1 1 1 55m
openshift-worker-1.example.com 1 1 1 1 55m
Increase the number of worker nodes by one.
$ oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api
Replace <num> with the new number of worker nodes. Replace <machineset> with the name of the machine set from the previous step.
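For example, to grow the machine set from the earlier listing from one replica to two:
$ oc scale --replicas=2 machineset openshift-worker-0.example.com -n openshift-machine-api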
Check the status of the bare metal node.
$ oc -n openshift-machine-api get bmh openshift-worker-<num>
Where <num> is the worker node number. The STATE changes from ready to provisioning.
NAME STATE CONSUMER ONLINE ERROR
openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true
The provisioning status remains until the OpenShift Container Platform cluster provisions the node. This can take 30 minutes or more. After the node is provisioned, the state changes to provisioned.
NAME STATE CONSUMER ONLINE ERROR
openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true
After provisioning completes, ensure the bare metal node is ready.
$ oc get nodes
NAME STATUS ROLES AGE VERSION
provisioner.openshift.example.com Ready master 30h v1.22.1
openshift-master-1.openshift.example.com Ready master 30h v1.22.1
openshift-master-2.openshift.example.com Ready master 30h v1.22.1
openshift-master-3.openshift.example.com Ready master 30h v1.22.1
openshift-worker-0.openshift.example.com Ready worker 30h v1.22.1
openshift-worker-1.openshift.example.com Ready worker 30h v1.22.1
openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.22.1
You can also check the kubelet.
$ ssh openshift-worker-<num>
[kni@openshift-worker-<num>]$ journalctl -fu kubelet